New Relic Announces Date of Second Quarter Fiscal Year 2020 Financial Results Conference Call

Source: New Relic

Working the Kinks Out of Workloads

Data center and colocation facility operators face many challenges every day in keeping workloads running smoothly. One of the biggest is gaining complete visibility into every device connected to the network. It sounds like a simple “pinging” exercise, but in reality it is surprisingly difficult to achieve.
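
To see why, consider what the naive “pinging” approach actually looks like. Below is a minimal sketch of an ICMP sweep of a single /24; the subnet is hypothetical, and the point is what it misses: devices that drop ICMP, sleep, or sit on subnets nobody thought to scan simply never show up.

```python
# Naive "ping sweep" discovery: a minimal sketch, for illustration only.
# The subnet below is hypothetical; uses the Linux 'ping' utility.
import subprocess
from concurrent.futures import ThreadPoolExecutor

SUBNET = "192.168.1"  # hypothetical /24 to sweep

def ping(host: str) -> bool:
    """Return True if the host answers a single ICMP echo within 1 second."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "1", host],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

with ThreadPoolExecutor(max_workers=64) as pool:
    hosts = [f"{SUBNET}.{i}" for i in range(1, 255)]
    alive = [h for h, ok in zip(hosts, pool.map(ping, hosts)) if ok]

print(f"{len(alive)} of {len(hosts)} addresses answered ping")
# Silence is not absence: firewalled or sleeping devices are invisible here.
```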

To help IT managers, Ping! Zine turns to Nlyte for a reality check on best practices for achieving 100% visibility into connected devices, the status of software licenses, and who has the ability to change device configurations.

As originally published by Ping! Zine, Nlyte’s CMO, Mark Gaydos, outlines the challenges and the solutions for unlocking that elusive single source of truth, which is needed to keep workloads processing smoothly and digital assets better protected.

Read Mark’s best practices below.

 _____________________________________________________________________________

As we look at the issues data centers will face in 2019, it’s clear that it’s not all about power consumption. There is an increasing focus on workloads, but, unlike in the past, these workloads are not contained within the walls of a single facility; rather, they are scattered across multiple data centers, colocation facilities, public clouds, hybrid clouds, and the edge. In addition, there has been a proliferation of devices, from micro data centers down to the IoT sensors used in agriculture, smart cities, restaurants, and healthcare. Given this sprawl, IT infrastructure managers will need better visibility into the end-to-end network to ensure smooth workload processing.

If data center managers fail to obtain a more in-depth understanding of what is happening in the network, applications will begin to lag, security problems will arise from outdated firmware, and compliance issues will follow. Inevitably, managers who choose not to develop this level of operational understanding will find their facilities in trouble, because they lack the visibility and metrics needed to see what is really happening.

You Can’t Manage What You Don’t Know

In addition to the aforementioned issues, if the network is not scrutinized at a high level of granularity, operating costs will rise, because it becomes more and more difficult to maintain a clear picture of all the hardware and software now sprawled out to the computing edge. Managers will always be held accountable for every device and application running on the network, no matter where it is located. However, managers savvy enough to deploy a technology asset management (TAM) system will avoid many hardware and software problems through its ability to collect more in-depth information. With more data collected, these managers have a single source of truth for the entire network, allowing them to better manage security, compliance, and software licensing.

Additionally, a full understanding of the devices and configurations responsible for processing workloads across this diverse IT ecosystem will help applications run smoothly. Managers need a TAM solution to remove the many obstacles to a deep dive into the full IT ecosystem, because good infrastructure management is no longer only about the cabling and devices neatly stacked within the racks. Now, data center managers need to grasp how a fractured infrastructure, spread across physical and virtual environments, still acts as a unified entity that affects every workload and application’s performance.

Finding the Truth in Data

The ability to consult a single source of truth, gleaned from data gathered across the entire infrastructure sprawl, will also help keep OPEX in check. A TAM solution combines financial, inventory, and contractual functions to optimize spending and support lifecycle management. Being armed with this enhanced data set supports strategic, balance-sheet-level decisions.

Data center managers must adjust how they view and interact with their total operations. It is about looking at operations applications-first, starting from where they are running and tracing them back through the infrastructure. With this macro point of view, managers will be better equipped to optimize workloads at the lowest cost while also meeting the best service-level agreements possible.

It’s true: no two applications run alike. Some must live in containers or special environments due to compliance requirements, and others may move around. An in-depth understanding of the devices and workloads behind these applications is critically important, because you do not want to make the wrong call and put an application into a public cloud when it needs the security or compliance guarantees of a private cloud.

Most organizations will continue to grow, and as they do, the IT assets required to support operations will grow in number as well. Using a technology asset management system as the single source of truth is the best way to track and maintain assets wherever they reside on today’s virtual, sprawled-out networks. Imagine how difficult it would be, without a TAM solution in place, to answer the following questions from your CIO or CFO (a minimal query sketch follows the list):

  • Are all our software licenses currently being used and are they all up to date?
  • How many servers do we have running now and how many can we retire next quarter?
  • Our ERP systems are down and the vendor says we owe them $1M in maintenance fees before they help us. Is this correct?
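
As a thought experiment, here is a minimal sketch of how a TAM inventory could answer the first two questions directly. The record fields, dates, and idle threshold are hypothetical, invented purely for illustration; they are not Nlyte’s schema.

```python
# Answering the CIO/CFO questions from a hypothetical TAM inventory.
from datetime import date

inventory = [
    {"type": "license", "product": "DB Suite", "in_use": True,  "expires": date(2020, 6, 30)},
    {"type": "license", "product": "ERP",      "in_use": False, "expires": date(2019, 1, 31)},
    {"type": "server",  "name": "app-01", "cpu_util_90d": 0.46},
    {"type": "server",  "name": "app-02", "cpu_util_90d": 0.02},  # zombie candidate
]

today = date(2019, 3, 1)

idle_licenses = [a for a in inventory if a["type"] == "license" and not a["in_use"]]
expired = [a for a in inventory if a["type"] == "license" and a["expires"] < today]
servers = [a for a in inventory if a["type"] == "server"]
retire = [s for s in servers if s["cpu_util_90d"] < 0.05]  # arbitrary idle threshold

print(f"Unused licenses: {[l['product'] for l in idle_licenses]}")
print(f"Out-of-date licenses: {[l['product'] for l in expired]}")
print(f"Servers running: {len(servers)}; retirement candidates: {[s['name'] for s in retire]}")
```

With fresh inventory data, each answer is a query rather than a guess; without it, each answer is a week of spreadsheet archaeology.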

IT assets will always be dynamic and therefore must be meticulously tracked at all times. Laptops are constantly on the move, servers are shuffled around or left running idle in a “zombie” state, and HR is constantly hiring or letting employees go. Given that data center managers must now share IT asset information with many business units, it is imperative that a fresh inventory be continually maintained.

We are all embarking on a new digital world in which network performance hinges on understanding the interrelationships between hardware and software at a depth previous IT managers never had to contend with. Leveraging new tools for complete network and workload visibility will provide the transparency necessary to ensure smooth operations in our distributed IT ecosystem.

Source: Nlyte

New Relic Delivers Industry’s First Observability Platform That Is Open, Connected and Programmable, Enabling Companies to Create More Perfect Software

Source: New Relic

New Relic Announces Management Changes – Michael Christenson Will Join as President, Chief Operating Officer

Source: New Relic

Asset Management Intelligence: What You Don’t Know Will Cost You

Have you ever asked your IT staff about the status of a server or application and received a less-than-comforting reply? Clearly, this is not a position you want to find yourself in. A lack of in-depth knowledge about what is connected, who has access, and which software version is running on the network leads to security breaches, unnecessarily high operating costs, and vendors taking advantage of you.

Unfortunately, this scenario is commonplace. To help find an answer, Data Center Knowledge turns to Nlyte’s Chief Marketing Officer, Mark Gaydos, for some real-world advice for readers.

As originally posted in Data Center Knowledge, read below how to arm your IT staff with the visibility needed to truly understand what is happening, anywhere, on the network.

Asset Management Intelligence: What you don’t know will cost you.

“What do you know about your computing infrastructure?”

It’s a question that is often asked but seldom answered sufficiently by IT staff. In fact, most replies seem to contain the phrase “we think.” When it comes to important questions, such as what the directory services report or how large a contract really is, “we think” will not cut it. To answer these questions properly, IT needs the ability to gather data from a large set of assets and, more importantly, to consolidate and reconcile that data so it makes sense to human resources, finance, legal, and the other stakeholders within the company who require current information.

Collecting data is only the first step. Once the necessary data is in one system, asset intelligence can be leveraged to serve a number of vital needs. For example, it can provide a greater understanding of a company’s vendors, enabling negotiation with confidence: asset intelligence uncovers the hardware and software details that, for instance, empower a purchasing department to negotiate more favorable prices from contracted vendors.

Utilizing asset management intelligence is not only about vendor negotiations; it has many other practical applications, because the data is always fresh: a single source of truth for an ever-changing inventory. Imagine having at your fingertips information on who is using what software, on which desktops, and when the last patch was applied. For example, if there is a Microsoft fix or a big Intel patch for a CPU problem, how do you know how many systems actually received the patch?
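
With a fresh inventory, that patch question collapses into a one-line count. A minimal sketch follows; the hostnames are invented, and the KB number stands in for whatever Microsoft or Intel fix is in question.

```python
# Counting patch coverage from an asset inventory: a hypothetical sketch.
fleet = [
    {"host": "desk-014", "patches": {"KB4056892", "KB4074588"}},
    {"host": "desk-021", "patches": {"KB4074588"}},
    {"host": "desk-033", "patches": set()},
]

required = "KB4056892"  # illustrative patch identifier
missing = [m["host"] for m in fleet if required not in m["patches"]]
print(f"{len(fleet) - len(missing)}/{len(fleet)} systems patched; missing: {missing}")
```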

Then there is the matter of controlling maintenance renewals. Do you know what you are renewing, or are you just signing a check? By applying asset management techniques, you can reconcile the necessary data and cross-reference it against contractual obligations to determine whether a vendor is charging you too much.
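
In the same spirit, here is a hypothetical reconciliation of a renewal quote against what is actually deployed; the line items and counts are made up for illustration.

```python
# Reconciling a maintenance renewal against discovered assets (hypothetical data).
contracted = {"ERP server": 40, "DB license": 120}   # what the vendor is billing for
deployed   = {"ERP server": 31, "DB license": 118}   # what the inventory actually found

for item, billed in contracted.items():
    actual = deployed.get(item, 0)
    if actual < billed:
        print(f"{item}: billed for {billed}, only {actual} deployed "
              f"-> renegotiate {billed - actual} units")
```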

The fact is, IT has become responsible for finding things nobody knows about, and older, legacy asset management systems are of little use because they lack these advanced capabilities. Change management applications can help to an extent, but 3% to 4% of technology assets are phantom, meaning that nobody knows about them. Yet they are still connected to, and running on, the network, where change management applications will never see them.
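
A simple way to picture the phantom-asset problem is as a set difference: what a network scan actually sees, minus what the inventory says should be there. The addresses below are invented.

```python
# Phantom assets = on the wire but not in the books (illustrative addresses).
scanned   = {"10.0.4.11", "10.0.4.17", "10.0.4.52", "10.0.4.91"}  # seen on the network
inventory = {"10.0.4.11", "10.0.4.17", "10.0.4.52"}               # known to the CMDB

phantoms = scanned - inventory   # running, connected, and unknown
print(f"Phantom devices: {sorted(phantoms)}")
```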

Technology Asset Management Solutions Provide the Single Source of Truth

By contrast, today’s asset management solutions go far beyond change management capabilities. They offer granular information, such as the ability to geo-locate a device or to identify a subnet and its prefix and link it back to the core system. This matters because fresh data is now required by all parts of the organization: service management wants to know the configuration of the hardware and operating systems; facilities needs to know how many servers there are to power the racks properly; and mergers-and-acquisitions teams need data to know exactly what they are acquiring. IT managers savvy enough to deploy a technology asset management (TAM) system will have that single source of truth, allowing them to better manage security, compliance, and software licensing.

Additionally, a full understanding of the devices and configurations responsible for processing workloads across this diverse IT ecosystem will help applications run smoothly. IT managers need a TAM solution to remove the many obstacles to a deep dive into the full IT ecosystem, because good infrastructure management is no longer only about the cabling and devices neatly stacked within the racks. Now, data center managers need to grasp how a fractured infrastructure, spread across physical and virtual environments, still acts as a unified entity that affects every workload and application’s performance.

Conclusion

One of the biggest problems vexing IT is that you can’t manage what you don’t know about.

And now, the IoT economy is exacerbating the device and application problem, because literally everything is connected to the network: water meters, home oil-tank gauges, even rodent traps. It is not uncommon to see 150 to 200 devices on a subnet. If you don’t have visibility into, or an inventory of, these devices, you will have problems. Without this information, items are invisible on your network, and hackers who are wise to this will exploit those vulnerabilities.

The situation is similar to installing a security system in your house: if you don’t know how many doors and windows you have, you are vulnerable. TAM solutions solve the vulnerability issue by providing visibility, and by turning isolated data into actionable information vital to many other operations.

Source: Nlyte

How to Read a Psychrometric Chart

Raise your hand if you know how to read and interpret a Psychrometric Chart.

An esteemed colleague in the DCIM space recently published a blog post on that very subject.

For everyone who just raised a hand: hurray for you! And I apologize if the following blog seems disrespectful to you; it is not intended to be.

A Visual Representation

Now, for those of you who didn’t raise your hand: a psychrometric chart is something that looks like an American Indian dream catcher, a beautifully woven string pattern used to capture and preserve your dreams. It is a “visual” representation of thermodynamic properties, plotted from data points such as:

  • Dry bulb temperature
  • Vapor pressure
  • Dew point
  • Humidity ratio
  • Enthalpy
  • Saturation temperature
  • Wet bulb temperature
  • Specific volume of dry air
  • Relative humidity

Digging Out My Old College Physics Books

Now, I was a physics major in college, and early in my career I did thermal consulting for companies seeking NEBS compliance for their telecommunications equipment. Yet I have to admit that when I saw this blog subject, I had to dig out my old college physics books, which were buried under my slide rule and HP12c calculator. I began to wonder: if this is so critical to managing the thermodynamics of a data center, why don’t we see more of these charts in our DCIM software solutions?

Seriously, this is why we invented computers. I mean, it is a nerdy, fun thing to use a sling psychrometer to get two different temperature readings, from a wet bulb and a dry bulb, and calculate the humidity (and therefore the dew point, etc.). But then, what do you do with that information once you have it? I guess one thing you can do is predict PUE using a three-layer perceptron neural network, much like Google has done… feeling the nerd-burn?
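
For anyone still holding a slide rule, here is a minimal sketch of that wet-bulb/dry-bulb calculation. It uses the common Magnus approximation and a typical aspirated-psychrometer coefficient; the constants are textbook values assumed for sea-level pressure, and real psychrometric software handles pressure and temperature ranges far more carefully.

```python
# Relative humidity and dew point from sling-psychrometer readings.
# Minimal sketch: Magnus approximation plus a standard psychrometer
# constant (A ~ 6.62e-4 1/degC for an aspirated instrument at sea level).
import math

P_HPA = 1013.25     # assumed station pressure, hPa
A = 6.62e-4         # psychrometer coefficient, 1/degC (typical textbook value)

def saturation_vapor_pressure(t_c: float) -> float:
    """Saturation vapor pressure over water in hPa (Magnus formula)."""
    return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

def from_psychrometer(t_dry: float, t_wet: float):
    """Return (relative humidity %, dew point degC) from the two bulb readings."""
    e = saturation_vapor_pressure(t_wet) - A * P_HPA * (t_dry - t_wet)
    rh = 100.0 * e / saturation_vapor_pressure(t_dry)
    ln_ratio = math.log(e / 6.112)        # invert Magnus to get dew point
    dew_point = 243.12 * ln_ratio / (17.62 - ln_ratio)
    return rh, dew_point

rh, dp = from_psychrometer(25.0, 18.0)    # example: 25 C dry bulb, 18 C wet bulb
print(f"RH ~= {rh:.0f}%  dew point ~= {dp:.1f} C")   # roughly 50% and 14 C
```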

So the Question Is, “OK Great, What Do You Do With the Ability to Read This?”

For an A/C system it becomes important so you can optimize cost as these factors change rapidly, or when you can predict they are about to change. This gets into how cooling and dehumidification work, but to put it imprecisely but simply: if it is going to get hot and humid outside, it is far cheaper to overcool a space and keep the doors closed than it is to wait and then cool both the room and the humidified air inside it. The real use of being able to interpret this data (“read the chart”) is to plan what to do as environmental factors, measured and predicted, change, so that your A/C units can work optimally.

You know, this thinking has even more relevance to legal cannabis growing than it does to data centers, as dew point and related factors make a big difference in indoor crop yield, as they do for any other indoor organic growing, I suppose. If cannabis were broadly legalized, there would be a huge play in the indoor-grow space in terms of power efficiency, and potentially space management with workflow. Running a hydroponic farm isn’t all that different from running a data center; it is just that moves/adds/deletes happen on a more scheduled basis. The problem, for now, is that growers are a cash-only business, which is too hard to work with. Those guys would love to be able to instrument their environmentals using Nlyte.

We are currently working on “enhancing” this research using the data available to us and channeling it through our machine learning AI engine. But for now, I will let the computers do the math and safely tuck my slide rule and HP12c back into the box of musty old books.

Source: Nlyte

New Relic Helps Komatsu Optimize Digital Customer Experience for Company’s Modern Construction Initiatives

Source: New Relic

New Relic Rated Highest in 2019 Gartner Peer Insights ‘Voice of the Customer’: Application Performance Monitoring

Source: New Relic

New Relic Announces First Quarter Fiscal Year 2020 Results

Source: New Relic

New Relic Announces Date of First Quarter Fiscal Year 2020 Financial Results Conference Call

Source: New Relic
