DevOps teams spend an average of 25 hours per week using, monitoring, and supporting tools and solutions.
With the rise of machine learning and artificial intelligence, it’s not much of a stretch to assume that in the not-too-distant future, data centers may rely heavily on automation to manage infrastructure and server health and performance.
But we’re not there yet. According to a report released this week by Intel’s Data Center Manager (DCM) group, DevOps teams spend an average of 25 hours per week using, monitoring, and supporting the tools and solutions that manage server and infrastructure performance.
Intel DCM – the group within Intel that runs its Data Center Manager tools and API set – launched around five years ago to begin baking additional telemetry and sensors into data center hardware, Jeff Klaus, Intel’s general manager of Data Center Solutions, tells Talkin’ Cloud.
“We started the DCM project aggregating and collecting all of this telemetry, and putting it into a form that the user could easily generate results or draw conclusions based on their problematic use cases that they would encounter,” he says. Now its clients and partners are asking for more.
“There is still a lot of time spent tuning the infrastructure,” Klaus says, referencing the survey. “In some respects it’s good because it means the customers are using our tools, but I think there’s an opportunity here to have more – it might not necessarily mean automation yet, but at least more standardized reporting and analysis that will catch red flags.”
Klaus says Intel has invested more in health information at the request of customers who want a better understanding of whether their SSDs or other storage devices are healthy and whether their CPUs are cycling at normal levels, or who need help locating newer servers that can run workloads more efficiently.
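The kind of “standardized reporting that will catch red flags” Klaus describes can be pictured as threshold checks over aggregated health telemetry. The sketch below is purely illustrative – it is not Intel DCM’s actual API, and the field names, thresholds, and `ServerTelemetry` type are all invented for the example:

```python
# Hypothetical illustration of red-flag checks over server health telemetry.
# Not Intel DCM's API; all names and thresholds here are assumptions.
from dataclasses import dataclass


@dataclass
class ServerTelemetry:
    name: str
    ssd_wear_pct: float   # percent of rated SSD write endurance consumed
    cpu_util_pct: float   # sustained average CPU utilization


def red_flags(t: ServerTelemetry,
              ssd_wear_limit: float = 80.0,
              cpu_util_limit: float = 90.0) -> list:
    """Return human-readable warnings when telemetry crosses thresholds."""
    flags = []
    if t.ssd_wear_pct > ssd_wear_limit:
        flags.append(f"{t.name}: SSD wear at {t.ssd_wear_pct:.0f}%")
    if t.cpu_util_pct > cpu_util_limit:
        flags.append(f"{t.name}: CPU sustained at {t.cpu_util_pct:.0f}%")
    return flags
```

Run across a fleet, a check like this surfaces the aging-drive and hot-CPU cases Klaus mentions without requiring an operator to eyeball raw sensor data.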
“Today we see some environments that are very progressive like an Amazon that are looking for tools for this, but the rest of the pack seems to be further behind,” Klaus says.
In what Klaus calls “unsophisticated environments” – where operators are running workloads on older equipment – few changes are being made to the infrastructure, so workloads run inefficiently even when newer equipment is available.
“They don’t see the data that their newer servers could run these workloads 4x, 5x more efficiently,” he says.
While work is being done in this area, and it’s certainly a focus for Intel DCM, the report concludes that there need to be better tools for the job.