What is the Difference Between CMDB and Asset Management?

As originally posted in IT Toolbox, Mark Gaydos, Chief Marketing Officer at Nlyte Software, dispels the confusion between a CMDB and ITAM. Our industry is full of acronyms, and these two terms are often misused. They are similar in that each is a repository of valuable information used to improve business services. However, it is the type of information each holds, and what that data is used for, that makes each one unique.

This is a “must-read” for anyone looking to draw the line between CMDB and ITAM solutions. The two work together to track contracts and devices and help ensure IT assets are better protected, but one is primarily a business tool while the other is more valuable for IT-related endeavors.

Do you know the difference?

Read this blog to see if you’re correct or to learn how different they really are.

____________________________________________________________________________________________

There is a wealth of valuable data, neatly contained in system warehouses with labels such as Configuration Management Database (CMDB) and Information Technology Asset Management (ITAM). These are not redundant repositories. Each system holds its own “truths”, with a usefulness that is specific either to business-related endeavors or to IT services. Knowing the differences between the two will ease business processes as well as support the IT infrastructure with an added level of security. Each has a home in any mid-size to large organization.

CMDB and ITAM: Dependent or Mutually Exclusive?

When it comes to managing IT assets, there’s no shortage of solution acronyms available.  Perhaps the ones that cause the most confusion are Configuration Management Database (CMDB) and Information Technology Asset Management (ITAM). Both solutions provide visibility into IT assets to help organizations better optimize and manage business services. So, what is the difference between one acronym and the other?

One of the biggest differences is that the CMDB data warehouse contains information typically applicable only to IT needs, whereas the ITAM solution contains information valuable to many other departments within a company. Each holds its own value to an organization; it all depends on what the particular needs are and what information is pertinent to making valuable decisions. You just need to decide which repository to dip your ladle into.

The Difference Between CMDB and ITAM

A CMDB is a repository for managing and maintaining Configuration Items (CIs). Think of CIs as items that are used as part of the Information Technology Infrastructure Library (ITIL), with added value for IT Service Management (ITSM) needs—just to bring a few more acronyms into play. Information in the CMDB is typically important only for IT needs and contains a good deal of change and configuration management detail, i.e., the CIs. It helps to think of the CI set as an architecture that must be constantly maintained in order to be a valuable contributor to IT services. Its value resides at the core of an ITSM solution, and it only includes assets relevant to that solution’s needs. It is a flat database that receives information but does not actively gather relational hardware or software data—placing the information in danger of quickly becoming outdated if not maintained regularly.
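To make the CI idea concrete, here is a minimal, hypothetical sketch of how a CI record and its relationships might be represented; the field names and types are illustrative assumptions, not any particular CMDB's schema.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class ConfigurationItem:
    """A hypothetical Configuration Item (CI) record as a CMDB might hold it."""
    ci_id: str
    ci_type: str                      # e.g. "server", "application", "database"
    name: str
    attributes: dict = field(default_factory=dict)
    # Relationships to other CIs (e.g. "runs_on", "depends_on") are what make
    # the CMDB useful for change and incident management.
    relationships: List[Tuple[str, str]] = field(default_factory=list)

# Illustrative entries: an application CI that depends on a server CI.
web_app = ConfigurationItem("CI-1001", "application", "order-portal",
                            attributes={"version": "4.2"},
                            relationships=[("runs_on", "CI-2001")])
db_host = ConfigurationItem("CI-2001", "server", "db-host-01",
                            attributes={"os": "Linux", "ip": "10.0.4.17"})

# Because the CMDB does not discover these records itself, they go stale
# unless change management keeps them current.
print(web_app.relationships)
```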

On the other hand, ITAM is more valuable as an overall business service, as it contains information on devices connected to the network that is valuable to non-IT departments. Think of the ITAM solution as an IT repository of real-time, network-connected asset details, ready to be queried into reports for finance, legal, compliance, and many other business stakeholders. It is also important to note that an ITAM solution can synchronize relevant information with CMDBs, but it maintains its own separate data repository for business-related needs. ITAM solutions also avoid the danger of containing stale data because they continually reach out across the network to automatically gather the most recent information.
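As a rough illustration of that "continually reach out across the network" behavior, the sketch below re-probes a list of known addresses and timestamps what it finds. Real ITAM discovery relies on agents and protocols such as SNMP or WMI rather than a bare TCP check, so treat the mechanics here as assumptions.

```python
import socket
from datetime import datetime, timezone

def probe(host: str, port: int = 22, timeout: float = 1.0) -> bool:
    """Return True if the host answers on the given TCP port (a crude liveness check)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def refresh_inventory(hosts):
    """Re-scan known hosts so the asset repository never relies on stale records."""
    now = datetime.now(timezone.utc).isoformat()
    return [{"host": h, "reachable": probe(h), "last_seen": now} for h in hosts]

if __name__ == "__main__":
    # Hypothetical address list; a real ITAM tool would discover these itself.
    print(refresh_inventory(["10.0.4.17", "10.0.4.18"]))
```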

People and Processes

If organizations are struggling over which solution provides the greatest value, the answer is actually obtained by asking a question: Who is looking for what type of information? If you’re in finance, human resources, legal or even IT security, you have a need for continuous, real-time information that can be obtained through an ITAM solution, pinging every device connected to the network. By contrast, if you need IP information (laptops) and application dependency details, then a CMDB is what you’re looking for.

The two solutions are not mutually exclusive and bring their own individual value to different parts of the organization. The common features between them are licensing data, hardware inventory details and software information. But the CMDB solution takes it one step further by adding subnet as well as RESTful API detail—stuff that makes the business stakeholder’s eyes glaze over.
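For a sense of what "RESTful API detail" can look like in practice, here is a hedged sketch that pulls CI records for a given subnet from a hypothetical CMDB endpoint; the URL, parameters, and response fields are invented for illustration.

```python
import requests  # third-party HTTP client

CMDB_URL = "https://cmdb.example.com/api/v1/cis"   # hypothetical endpoint

def fetch_cis_in_subnet(subnet: str, token: str) -> list:
    """Pull CI records filtered by subnet from a hypothetical CMDB REST API."""
    resp = requests.get(
        CMDB_URL,
        params={"subnet": subnet},
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

# Example call (assumes the service and token exist):
# cis = fetch_cis_in_subnet("10.0.4.0/24", token="...")
```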

Create a Clear Strategy

Most modern business services start out as best IT practices, and an organization’s reliance on relevant data to make decisions has never been greater. Companies need a clear strategy to collect and organize data for tactical purposes—both the CMDB and ITAM solutions are applicable allies for service strategies.

It also helps to recognize that everything in an organization has its own life-cycle, and this includes IT assets as well as employees. The two areas are understandably intertwined and dependent on one another to complete assigned tasks, and there are many tools available to guide each of them to enhance output and value. The data from CMDB and ITAM solutions has become just as important to making informed business decisions as SAP or Salesforce reports. Both solutions should work together to properly track contracts and devices and to help ensure IT assets are better protected from disruptive internal and external influences.

Source: Nlyte

New Relic Announces Second Quarter Fiscal Year 2020 Results

Source: New Relic

Hybridization of the Enterprise Compute Infrastructure

The State of Hybrid Infrastructure

From pets and plants in our garden to the technology landscape, everything is getting hybridized. In the Information Technology world, the hybrid cloud (or hybrid compute infrastructure, if you wish) is defined as interconnected on-premises, shared, and cloud resources used simultaneously as a collective pool. As noted in the “Voice of the Enterprise” survey conducted by the 451 Research Group, nearly 60% of organizations indicate that they are establishing a hybrid cloud environment. This means they are leveraging existing data center assets (traditional and private cloud), colocation, and cloud resources to meet various performance requirements. What seems a bit surprising, based on anecdotal conversations, is that only 18% of the participants in the survey indicated that the public cloud would become their sole computing platform.

Hybrid Cloud investment predictions for 2019 according to 451 Research Group

• Enterprise data centers show a 2.5% growth rate, down from 6% two years ago
• Colocation and managed service facilities are increasing 8.1%, surpassing enterprise-owned assets
• Cloud infrastructure is growing by 18%, which is expected, but is showing signs of slower growth

For tracking data center growth and density, the industry tracking site datacentermap.com gives a great visualization of private, shared, and public cloud-based data centers.

Drivers and Demands for the Hybrid Cloud

• Edge computing demand is driven by low-latency/high-bandwidth data demands, from autonomous vehicles and online transactions to IoT devices
• Cost management of applications, where IT organizations optimize workloads based on infrastructure and networking costs, risk management, and performance demands
• Performance factors drive workload placement – distributed computing for high transactional requirements; traditional centralized infrastructure for compute-intensive batch reporting and analytics
• Security and compliance drive workload placement. Traditionally, it was believed that an organization’s own data center offered more security and auditability. However, as these facilities age, colocation and public cloud vendors are adopting modern and more sophisticated systems.

Trends
Colocation services have evolved over the years from fortress-style data centers to highly connected internet exchange hubs. We are now seeing them evolve further to host more complex physical and virtual infrastructure capable of supporting the workload mobility demanded by hybrid cloud and edge computing needs.

With this maturing, competition among colocation providers is increasing; they now need to add unique services to entice and retain their tenants. This competition is good for both the customer and the colocation providers themselves. Enhanced services bring significant value to the tenant without the tenant paying the high cost of innovation, and improved services help bring operating costs down while driving more customer demand.

Colocation vendors are starting to provide more services and transparency to support the tenant’s workload management. To manage workloads in this complex hybrid environment, organizations need visibility into the entire stack: where each workload is, where it runs best, where it has the best security profile, and where it costs the least to run.

Complex environments generate millions of inputs from devices and sensors daily, and it is impossible to manually monitor and react to all of the valuable data that is available. Artificial Intelligence (AI) and machine learning are being adopted to help enumerate the possible outcomes for smarter decision making and improved failure predictability.
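As a down-to-earth illustration of that idea, the sketch below is a simple rolling statistical check (not a full machine-learning pipeline) that flags sensor readings drifting far from their recent average, the kind of signal a predictive-failure system would surface automatically. The window and threshold are assumptions.

```python
from collections import deque
from statistics import mean, stdev

def detect_anomalies(readings, window=20, threshold=3.0):
    """Flag readings more than `threshold` standard deviations from the rolling mean."""
    history = deque(maxlen=window)
    anomalies = []
    for i, value in enumerate(readings):
        if len(history) >= 5 and stdev(history) > 0:
            z = (value - mean(history)) / stdev(history)
            if abs(z) > threshold:
                anomalies.append((i, value))
        history.append(value)
    return anomalies

# Example: a temperature sensor that spikes once.
temps = [22.1, 22.0, 22.3, 22.2, 22.1, 22.4, 22.2, 35.0, 22.3, 22.1]
print(detect_anomalies(temps, window=5, threshold=3.0))
```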

The demand to optimize and tier workloads calls for more in-depth management tools to assist. DCIM tools are proving to be an ideal solution for both tenants and colocation providers to understand the nature of a workload and identify where it should move, based on the parameters of cost, performance, and risk.
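To show the shape of that placement decision, here is a minimal sketch that scores candidate venues on cost, performance, and risk with assumed weights; a DCIM tool would supply these numbers from live monitoring rather than hard-coded values.

```python
# Candidate venues with normalized scores (0 = worst, 1 = best). Values are illustrative.
venues = {
    "on-prem":      {"cost": 0.6, "performance": 0.8, "risk": 0.9},
    "colocation":   {"cost": 0.7, "performance": 0.7, "risk": 0.8},
    "public-cloud": {"cost": 0.9, "performance": 0.6, "risk": 0.6},
}

# Weights reflect what matters for a particular workload (assumed, not prescriptive).
weights = {"cost": 0.4, "performance": 0.4, "risk": 0.2}

def score(venue: dict) -> float:
    """Weighted sum of the venue's normalized scores."""
    return sum(weights[k] * venue[k] for k in weights)

best = max(venues, key=lambda name: score(venues[name]))
print({name: round(score(v), 2) for name, v in venues.items()}, "->", best)
```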

7 DCIM Considerations When Choosing a Colocation Provider

1. Is the location easily accessible for servicing vendors and employees, and are there costs or risks driven by environmental conditions?
2. Is the power supply reliable and are power costs predictable?
3. Is there appropriate cooling and PUE management to keep costs consistent and predictable while reducing overheating risks?
4. Does the host use Data Center Infrastructure Management tools, and can you interface with them to monitor and manage your environment?
5. Is there adequate physical security and reliable audit trail documentation?
6. Is there a comprehensive and integrated workload and workflow system in place?
7. Is the Colocation vendor’s Service Level Agreement in alignment with the tier requirements of the applications placed there, and are they transparent regarding SLA performance?

Challenges
The complexity of technology and the hybridization of the compute infrastructure require new skill sets and a broader depth of talent that organizations are challenged to find. These factors are driving the need for automation, as found in DCIM, and a resurgence in engagement with integrators such as IBM, Atos, and Accenture.

Quality of connectivity is more critical now than ever. Organizations need to know latency, jitter, and packet loss to ensure predictable, quality data transactions. Achieving that level of network quality requires a dedicated and secure connection not typically found in a plain IP connection; these connections can be very costly and time-consuming to manage. Colocation providers can provide them and guarantee them with their SLAs.
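To illustrate the latency and jitter metrics mentioned above, here is a minimal sketch that derives them from round-trip-time samples; how the samples are collected (ping, TCP probes, or a provider's monitoring feed) is left as an assumption.

```python
from statistics import mean

def connection_quality(rtts_ms, sent: int) -> dict:
    """Summarize latency, jitter, and loss from round-trip-time samples (in ms)."""
    received = len(rtts_ms)
    # Jitter here is the mean absolute difference between consecutive samples,
    # a common simplification of the interarrival-jitter estimate.
    diffs = [abs(b - a) for a, b in zip(rtts_ms, rtts_ms[1:])]
    return {
        "latency_ms": round(mean(rtts_ms), 2) if rtts_ms else None,
        "jitter_ms": round(mean(diffs), 2) if diffs else 0.0,
        "packet_loss_pct": round(100 * (sent - received) / sent, 2),
    }

# Example: 10 probes sent, 9 answered.
print(connection_quality([12.1, 11.8, 12.4, 30.2, 12.0, 11.9, 12.2, 12.1, 12.3], sent=10))
```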

The growing, diverse footprint of the hybrid cloud demands consolidated visibility for capacity planning, workload placement, and optimization. DCIM helps provide the visibility to understand these workloads and to automate the workflows needed to manage them across the hybrid environment.

New technology, scarce skills, and a lack of DevOps tooling challenge the move to hybrid cloud, adding complexity to capacity management. There are more and more computing platforms demanding metrics to know where to place a workload, and more and more applications are getting placed on non-optimal platforms. Organizations need to know resource availability and how much subscription capacity to commit to for maximum optimization of workloads.

What To Do Now
When selecting a colocation partner, first work through the seven DCIM considerations above, and view the provider as an extension of your own compute infrastructure. Make sure they provide tools that give you visibility, management, and SLA validation for your real estate. Ideally, find a colocation partner that provides a portal that allows you to manage your environment and integrate with your own DCIM solution.

Source: Nlyte

Global Survey Reveals Key Challenges and Technologies Expected to Drive the Next Phase of Digital Transformation

Source: New Relic

Gregory Ouillon Joins New Relic as EMEA Field Chief Technology Officer

Source: New Relic

New Relic Announces Date of Second Quarter Fiscal Year 2020 Financial Results Conference Call

Source: New Relic

Working the Kinks Out of Workloads

There are many challenges data center and colocation facility operators face—every day—when ensuring workloads are running smoothly. One of the biggest challenges is gaining complete visibility into every device connected to the network. It sounds like a simple “pinging” process, but in reality it is difficult to achieve.

To help IT managers, Ping! Zine turns to Nlyte for a reality check on best practices to enable 100% visibility into connected devices as well as the status of software licenses and those who have device configuration abilities.

As originally published by Ping! Zine, Nlyte’s CMO, Mark Gaydos, outlines the challenges and underscores the solutions for unlocking the elusive truth—so needed to ensure workloads are smoothly processed and digital assets better protected. 

Read Mark’s best practices below.

 _____________________________________________________________________________

As we look at the issues data centers will face in 2019, it’s clear that it’s not all about power consumption. There is an increasing focus on workloads, but, unlike in the past, these workloads are not contained within the walls of a single facility; rather, they are scattered across multiple data centers, colocation facilities, public clouds, hybrid clouds, and the edge. In addition, there has been a proliferation of devices, from micro data centers down to IoT sensors used in agriculture, smart cities, restaurants, and healthcare. Due to this sprawl, IT infrastructure managers will need better visibility into the end-to-end network to ensure smooth workload processing.

If data center managers fail to obtain a more in-depth understanding of what is happening in the network, applications will begin to lag, security problems due to old firmware versions will arise, and non-compliance issues will follow. Inevitably, those data center managers who choose not to develop a deep level of operational understanding will find their facilities in trouble, because they lack the visibility and metrics needed to see what’s really happening.

You Can’t Manage What You Don’t Know

In addition to the aforementioned issues, if the network is not scrutinized with a high level of granularity, operating costs will begin to increase because it becomes more and more difficult to obtain a clear understanding of all the hardware and software pieces that are now sprawled out to the computing edge. Managers will always be held accountable for all devices and software running on the network, no matter where they are located. However, managers savvy enough to deploy a technology asset management (TAM) system will avoid many hardware and software problems through the ability to collect more in-depth information. With more data collected, these managers have a single source of truth—for the entire network—to better manage security, compliance, and software licensing.

Additionally, a full understanding of the devices and configurations responsible for processing workloads across this diverse IT ecosystem will help applications run smoothly. Managers need a TAM solution to remove many challenges that inhibit a deep dive into the full IT ecosystem because today, good infrastructure management is no longer only about the cabling and devices neatly stacked within the racks. Now, data center managers need to grasp how a fractured infrastructure, spread across physical and virtual environments, is still a unified entity that impacts all workloads and application performance.

Finding the Truth in Data

The ability to view a single source of truth, gleaned from data gathered across the entire infrastructure sprawl, will also help keep OPEX in check. A TAM solution combines financial, inventory, and contractual functions to optimize spending and support lifecycle management, and being armed with this enhanced data set promotes strategic balance-sheet decisions.

Data center managers must adjust how they view and interact with their total operations. It’s about looking at those operations from the applications first—where they’re running—then tracing them back through the infrastructure. With this macro point of view, managers will be better equipped to optimize workloads at the lowest cost while also ensuring the best service level agreements possible.

It’s true, no two applications ever run alike. Some applications may need to be in containers or special environments due to compliance requirements, and others may move around. An in-depth understanding of the devices and the workloads that process these applications is critically important, because you do not want to make the wrong decision and put an application into a public cloud when it requires the security and/or compliance of a private cloud.

Most organizations will continue to grow in size, and as they do, the IT assets required to support operations will also increase in number. Using a technology asset management system as the single source of truth is the best way to track and maintain assets regardless of where they reside on today’s virtual or sprawled-out networks. Imagine how difficult it would be to find the answers if your CIO or CFO came to you and asked the following questions—without a TAM solution in place:

  • Are all our software licenses currently being used and are they all up to date?
  • How many servers do we have running now and how many can we retire next quarter?
  • Our ERP systems are down and the vendor says we owe them $1M in maintenance fees before they help us. Is this correct?
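To make that concrete, here is a minimal sketch of how a consolidated asset inventory could answer the first two questions; the record fields and figures are hypothetical, and in practice the data would come from automated discovery rather than a hand-maintained list.

```python
from datetime import date

# Hypothetical consolidated inventory pulled from a TAM repository.
assets = [
    {"type": "server", "name": "app-01", "last_seen": date(2019, 10, 1), "cpu_util_pct": 42},
    {"type": "server", "name": "app-02", "last_seen": date(2019, 10, 1), "cpu_util_pct": 2},
    {"type": "license", "name": "DB Enterprise", "entitled": 50, "installed": 61},
    {"type": "license", "name": "Office Suite", "entitled": 500, "installed": 447},
]

# Q1: are all software licenses in compliance (installed within entitlement)?
over_deployed = [a for a in assets if a["type"] == "license" and a["installed"] > a["entitled"]]

# Q2: how many servers are running, and which look idle enough to retire?
servers = [a for a in assets if a["type"] == "server"]
retire_candidates = [s["name"] for s in servers if s["cpu_util_pct"] < 5]

print(f"{len(over_deployed)} license(s) over-deployed: {[a['name'] for a in over_deployed]}")
print(f"{len(servers)} servers running; retirement candidates: {retire_candidates}")
```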

IT assets will always be dynamic and therefore must be meticulously tracked all the time. Laptops are constantly on the move, servers are shuffled around or left in a depleted zombie state and HR is constantly hiring or letting employees go. Given that data center managers must now share IT asset information with many business units, it’s imperative that a fresh list is continually maintained.

We are all embarking upon a new digital world where network performance depends on a level of understanding of the interrelationships between hardware and software that previous IT managers never had to contend with. Leveraging new tools for complete network and workload visibility will provide the transparency necessary to ensure smooth operations in our distributed IT ecosystems.

Source: Nlyte

New Relic Delivers Industry’s First Observability Platform That Is Open, Connected and Programmable, Enabling Companies to Create More Perfect Software

Source: New Relic

New Relic Announces Management Changes – Michael Christenson Will Join as President, Chief Operating Officer

Source: New Relic

Asset Management Intelligence; What You Don’t Know, Will Cost You

Have you ever made an inquiry about the status of a server or application to your IT staff and received a less than comforting reply? Clearly, this is not the zone you want to find yourself in. A lack of in-depth knowledge regarding what is connected, who has access, and which software version is running on the network will lead to security breaches, unnecessarily high operating costs, and vendors taking advantage of you.

Unfortunately, this scenario is commonplace. To help find an answer, Data Center Knowledge turns to Nlyte’s Chief Marketing Officer, Mark Gaydos, for some real-world advice to give to readers.

As originally posted in Data Center Knowledge, read below how to arm your IT staff with the visibility needed to truly understand what is happening–anywhere–on the network.

Asset Management Intelligence; What you don’t know, will cost you.

“What do you know about your computing infrastructure?”

It’s a question that is often asked but seldom answered sufficiently by IT staff. In fact, most replies seem to contain the phrase “we think.” When it comes to important questions about information from directory services or contract sizes, “we think” will not cut it. To properly answer these questions, IT needs the ability to gather data from a large host of assets and, more importantly, consolidate and reconcile that data so it all makes sense for human resources, finance, legal, and other stakeholders within the company that require current information.

Collecting data is only the first step. Once the necessary data is in one system, asset intelligence can be leveraged to serve a number of vital needs. For example, it can provide a greater understanding of a company’s vendors so you can negotiate with confidence: asset intelligence uncovers the hardware and software details that enable that confidence, such as empowering a purchasing department to negotiate more favorable prices from contracted vendors.

Utilizing asset management intelligence is not only about vendor negotiations; it has many other practical applications because the data is always fresh—it’s the single source of truth for an ever-changing inventory. Imagine having information at your fingertips on who is using what software, on which desktops, and when the last patch was applied. For example, if there is a Microsoft fix or a big Intel patch for a CPU problem, how do you know how many systems actually received the patch?
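As a minimal sketch of that patch-coverage question, assuming the asset repository records each system's applied patches (the identifiers and hosts below are illustrative):

```python
REQUIRED_PATCH = "KB4493472"   # hypothetical patch identifier

# Hypothetical records pulled from the asset repository.
systems = [
    {"host": "desk-001", "patches": {"KB4489868", "KB4493472"}},
    {"host": "desk-002", "patches": {"KB4489868"}},
    {"host": "desk-003", "patches": {"KB4493472"}},
]

unpatched = [s["host"] for s in systems if REQUIRED_PATCH not in s["patches"]]
print(f"{len(systems) - len(unpatched)} of {len(systems)} systems patched; missing: {unpatched}")
```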

Then there is controlling maintenance renewals. Do you know what you’re renewing, or are you just signing a check? Applying asset management techniques, you can reconcile the necessary data and cross-reference it with contractual obligations to determine whether a vendor is charging you too much.
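And a similar sketch for the renewal cross-check, comparing a vendor's quoted seat counts against what asset management says is actually deployed (all figures hypothetical):

```python
# Vendor renewal quote vs. what asset management says we actually still run.
quote = {"DB Enterprise": 61, "App Server": 40}        # seats the vendor wants to bill
deployed = {"DB Enterprise": 50, "App Server": 28}     # seats found on the network

for product, billed in quote.items():
    actual = deployed.get(product, 0)
    if billed > actual:
        print(f"{product}: quoted {billed}, only {actual} in use -> dispute {billed - actual} seats")
```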

The fact is, IT has become responsible for finding things nobody knows about, and older, legacy asset management systems are of little use because they lack these advanced capabilities. Change management applications can help to an extent, but 3% to 4% of technology assets are phantom assets: nobody knows about them, yet they are still connected to and running on the network, and change management applications will never see them.

Technology Asset Management Solutions Provide the Single Source of Truth

By contrast, today’s asset management solutions go far beyond change management capabilities. They offer granular information, such as the ability to geo-locate a device, identify its subnet prefix, and link it back to the core system. This is important because fresh data is now required by all parts of the organization: service management wants to know the configuration of the hardware and operating systems; facilities needs to know how many servers there are to properly power the racks; and mergers & acquisitions teams need data to know exactly what they’re acquiring. IT managers who are savvy enough to deploy a technology asset management (TAM) system will have that single source of truth to better manage security, compliance, and software licensing.
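As an illustration of that subnet-level linking, the sketch below uses Python's standard ipaddress module to map discovered device addresses back to defined subnets; the addresses and subnet list are assumptions.

```python
import ipaddress

# Hypothetical subnets defined in the core system, and devices found by discovery.
subnets = [ipaddress.ip_network(n) for n in ("10.0.4.0/24", "10.0.5.0/24")]
devices = {"db-host-01": "10.0.4.17", "printer-3f": "10.0.5.200", "unknown-cam": "192.168.9.4"}

for name, addr in devices.items():
    ip = ipaddress.ip_address(addr)
    home = next((str(s) for s in subnets if ip in s), "unmapped")
    print(f"{name:12s} {addr:15s} -> {home}")
```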

Additionally, a full understanding of the devices and configurations responsible for processing workloads across this diverse IT ecosystem will help applications run smoothly. IT managers need a TAM solution to remove many challenges that inhibit a deep dive into the full IT ecosystem because today, good infrastructure management is no longer only about the cabling and devices neatly stacked within the racks. Now, data center managers need to grasp how a fractured infrastructure, spread across physical and virtual environments, is still a unified entity that impacts all workloads and application performance.

Conclusion

One of the biggest problems vexing IT is: you can’t manage what you don’t know about.

And now, the IoT economy is exacerbating the device and application problem because literally everything is connected to the network: water meters, home oil tank gauges, even rodent traps. It’s not uncommon to see 150 to 200 devices on the subnets because everything is now connected. If you don’t have visibility into, or an inventory of, these devices, you will have problems. Without this information, items are invisible on your network, and hackers who are wise to this will exploit those vulnerabilities.

The situation is similar to installing a security system in your house: if you don’t know how many doors and windows you have, you are vulnerable. TAM solutions solve the vulnerability issue with visibility, as well as turning isolated data into actionable information vital for many other operations.

Source: Nlyte