New Relic to Host Investor Day on December 12, 2019

Source: New Relic

Total Economic Impact Study of the New Relic Platform for Cloud Migration and Optimization Shows Over 90% Savings in Deployment of Applications to the Cloud and 50% Reduction of Cloud Cost

Source: New Relic

Dmitri Chen Joins New Relic as EVP and General Manager, Asia-Pacific and Japan

Source: New Relic

Pioneer Corporation Leverages the New Relic Observability Platform to Scale with Rapid Business Growth

Source: New Relic

What it means to be a 'smart city' in 2019

As originally posted by the NASCIO Community, Andrew Ryan, Vice President of Federal Accounts at Nlyte Software, takes a look at the roles state and local governments play in smart cities. Billions of dollars are now being poured into connecting devices that promise to modernize the grid and improve transportation. Connecting all these devices also means creating new micro or edge networks that must be managed like any other data center, which involves the use of DCIM and TAM-related tools.

Read this blog to find out how much of our tax dollars are at stake and what needs to happen in order to ensure IoT-fueled smart cities deliver on their promises.

What it means to be a ‘smart city’ in 2019

According to GovTech Navigator’s 2019 State and Local Annual IT Spending, local organizations spent $51B and state organizations spent $46B in 2015. In 2019, these numbers are anticipated to climb to $53.1B and $54.5B, respectively. Health and human services and education each take the lion’s share of IT spending at $28B, while spending on transportation services came in second with $13B allocated. At 9.3% of annual U.S. GDP, state and local governments account for the fourth largest segment, behind manufacturing (17.1%), information (13.3%), and retail (9.6%).

Just over $100B in annual spending on IT-related products gives any agency a much-needed modernization boost to services that help govern citizens’ lives. Case in point: although not state-funded, everyone’s lives were forever altered by the Department of Defense’s support of the Advanced Research Projects Agency Network (ARPANET) in the 1960s, which created what is commonly known as the internet.

State and Local Take IT Innovation Mantle

Today, state and local governments are poised to capture the life-altering innovation mantle from the federal government with their support and rollout of Internet of Things (IoT) services. According to a recent study by IDC, IoT spending in the United States is expected to reach $194 billion this year; among the biggest spending areas are transportation at $71 billion and utilities at $61 billion. Just as federal funding transformed ARPANET into the internet, state and local IT spending will transform IoT into smart cities.

GovTech Navigator places the 2019 State and Local Government Utilities IT spend at $9.1B. The top drivers for IT spending are creating better customer experiences and modernizing the grid. IDC confirms this by pinpointing utilities’ IoT spending on smart grids for electricity, gas, and water. When it comes to transportation, GovTech says state and local government IT spending will be $13B, citing continued investment in smart communities and preparation for connected vehicles. Small pockets of city device connectivity will quickly escalate into a uniform mesh of data used to enhance our daily experiences. Sound familiar? History is repeating itself, as smart cities follow an adoption pattern similar to ARPANET’s, but this time state and local CIOs are driving the innovation.

IoT and the Call-to-Sprawl

Many devices have been invented to measure time, from the Egyptians’ T-square measuring shadows around 1500 BC to Sir Sandford Fleming, the Canadian railway construction engineer credited with developing the system of standard time still in use today. And then there is internet time, a clock that runs forward at a blistering pace, rendering cutting-edge innovations obsolete in months. State and local CIOs must keep internet time in mind when selecting hardware and software to build out smart city services; scalability and longevity are the keys to slowing the clock.

When it comes to purchasing components to construct smart cities, IDC puts 2019 hardware spending at $250 billion, led by more than $200 billion in module/sensor purchases, with IoT software spending totaling $154 billion. CIOs need to carefully account for sensor battery life, signal distance, and companies that already have LoRaWAN-type IoT networks in place. Partnering with turnkey, best-of-breed IoT companies will simplify and speed service adoption.

As the demand for smart city services continues to grow, data centers will stretch closer and closer to the point of data origination. These microdata centers will be tucked into various corners throughout municipalities, putting a strain on device management and IT security. CIOs should strongly consider Technology Asset Management (TAM) tools to help keep track of sprawl as they continue to make a significant impact on improving and optimizing current IT operations in the world’s largest and, now, smallest data centers.

In summary, state and local CIOs will be the stewards of the new internet: an internet of connected devices throughout our cities that improves civilian services. To fuel this growth, network modernization and improvements are on every agency’s agenda, and there are billions of dollars allocated to make it happen. A precedent was set with the formation of ARPANET; CIOs need to adhere to standards and best practices when making hardware and software selections to construct their citywide IoT networks. The next stage of data-infused evolution is here, and this time it does not reside with the federal government. It’s in the hands of state and local CIOs.

The post What it means to be a 'smart city' in 2019 appeared first on Nlyte.

Source: Nlyte

What is the Difference Between CMDB and Asset Management?

As originally posted in IT Toolbox, Mark Gaydos, Chief Marketing Officer at Nlyte Software, dispels the confusion between a CMDB and ITAM. Our industry is full of acronyms, and these two terms are often misused. They are similar in that each is a repository of valuable information used to improve business services. However, it is the type of information, and what that data is used for, that makes each one unique.

This is a “must-read” for anyone looking to draw the line between CMDB and ITAM solutions. Each helps track contracts and devices and ensure IT assets are better protected, but one is a pure business play while the other is more valuable to IT-related endeavors.

Do you know the difference?

Read this blog to see if you’re correct or to learn how different they really are.

____________________________________________________________________________________________

There is a wealth of valuable data, neatly contained in system warehouses, with labels such as Configuration Management Database (CMDB) and Information Technology Asset Management (ITAM). These are not redundant repositories. Each system holds individual “truths” with a usefulness that is specific to business-related endeavors or IT services. Knowing the differences between the two will ease business processes as well as support the IT infrastructure with an added level of security. Each has a home in any mid to large size organization.

CMDB and ITAM: Dependent or Mutually Exclusive?

When it comes to managing IT assets, there’s no shortage of solution acronyms available.  Perhaps the ones that cause the most confusion are Configuration Management Database (CMDB) and Information Technology Asset Management (ITAM). Both solutions provide visibility into IT assets to help organizations better optimize and manage business services. So, what is the difference between one acronym and the other?

One of the biggest differences resides in the fact that the CMDB data warehouse contains information typically only applicable to IT needs, whereas the ITAM solution contains information valuable to many other departments within a company. Each holds individual yet comparable value to an organization; it all depends on what the particular needs are and what information is pertinent to making valuable decisions. You just need to decide which repository to dip your ladle into.

The Difference Between CMDB and ITAM

A CMDB is a repository for managing and maintaining Configuration Items (CIs). Think of CIs as items used as part of an Information Technology Infrastructure Library (ITIL), with added value for IT Service Management (ITSM) needs, just to bring a few more acronyms into play. Information in the CMDB is typically only important for IT needs and contains a good deal of change and configuration management detail, i.e., the CIs themselves. It helps to think of the CI as an architecture that must be constantly maintained in order to be a valuable contributor to IT services. The CMDB’s value resides within the core of an ITSM solution, and it only includes assets relevant to that solution’s needs. It is a flat database that receives information but does not actively gather relational hardware or software data, placing the information in danger of becoming outdated quickly if it is not maintained regularly.

On the other hand, ITAM is more valuable as an overall business service, as it contains information on devices connected to the network and is valuable to non-IT departments. Think of the ITAM solution as an IT repository of real-time, network-connected asset details, ready to be queried into reports for finance, legal, compliance, and many other business stakeholders. It is also important to note that an ITAM solution can synchronize relevant information with CMDBs, but it maintains its own separate data repository for business-related needs. ITAM solutions also avoid the danger of containing stale data because they continually reach out across the network to automatically gather the most recent information.
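The distinction can be sketched in a few lines of Python. Everything here is an illustrative assumption, not any vendor's data model: the CMDB holds CIs and their relationships and only changes when change management updates it, while the ITAM store is refreshed wholesale by an automated discovery sweep.

```python
from dataclasses import dataclass, field

@dataclass
class ConfigurationItem:
    """A CI as a CMDB might hold it: identity plus relationships."""
    ci_id: str
    ci_type: str                                     # e.g. "server", "application"
    depends_on: list = field(default_factory=list)   # links to other CIs

@dataclass
class AssetRecord:
    """An asset as an ITAM repository might hold it: business-facing fields."""
    asset_id: str
    owner_dept: str          # useful to finance, legal, compliance
    license_count: int
    last_discovered: str     # stamped on each automated scan

class CMDB:
    """Flat CI store; goes stale unless change processes keep it current."""
    def __init__(self):
        self.cis = {}
    def register(self, ci):
        self.cis[ci.ci_id] = ci
    def dependencies(self, ci_id):
        return self.cis[ci_id].depends_on

class ITAM:
    """Asset repository refreshed by a (hypothetical) network discovery sweep."""
    def __init__(self):
        self.assets = {}
    def discover(self, scan_results):
        for rec in scan_results:          # each sweep overwrites stale records
            self.assets[rec.asset_id] = rec

cmdb = CMDB()
cmdb.register(ConfigurationItem("app-01", "application", depends_on=["db-01"]))
cmdb.register(ConfigurationItem("db-01", "server"))

itam = ITAM()
itam.discover([AssetRecord("lap-042", "finance", 3, "2019-11-01")])
```

Note the design difference the sketch encodes: the CMDB answers IT questions ("what does app-01 depend on?"), while the ITAM store answers business questions ("which department owns this laptop, and how many licenses does it carry?").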

People and Processes

If organizations are struggling over which solution provides the greater value, the answer is obtained by asking a question: who is looking for what type of information? If you’re in finance, human resources, legal, or even IT security, you need continuous, real-time information, which an ITAM solution obtains by pinging every device connected to the network. By contrast, if you need IP information (for laptops, for example) and application dependency details, then a CMDB is what you’re looking for.

The two solutions are not mutually exclusive and bring their own individual value to different parts of the organization. The common features between them are licensing data, hardware inventory details and software information. But the CMDB solution takes it one step further by adding subnet as well as RESTful API detail—stuff that makes the business stakeholder’s eyes glaze over.

Create a Clear Strategy

Most modern business services start out as best IT practices, and an organization’s reliance on relevant data to make decisions has never been greater. Companies need a clear strategy to collect and organize data for tactical purposes—both the CMDB and ITAM solutions are applicable allies for service strategies.

It also helps to recognize that everything in an organization has its own individual life cycle, and this includes IT assets as well as employees. Both areas are understandably intertwined and dependent on one another to complete assigned tasks, and there are many tools available to guide each of them to enhance output and value. The data from CMDB and ITAM solutions has become just as important to making informed business decisions as SAP or Salesforce reports. Both solutions should coincide to properly track contracts and devices and to help ensure IT assets are better protected from disruptive internal and external influences.

The post What is the Difference Between CMDB and Asset Management? appeared first on Nlyte.

Source: Nlyte

New Relic Announces Second Quarter Fiscal Year 2020 Results

Source: New Relic

Hybridization of the Enterprise Compute Infrastructure

The State of Hybrid Infrastructure

From pets to the plants in our garden to the technology landscape, everything is getting hybridized. In the Information Technology world, the hybrid cloud (or hybrid compute infrastructure, if you wish) is defined as interconnected on-premises, shared, and cloud resources used simultaneously as collective resources. As noted in the “Voice of the Enterprise” survey conducted by the 451 Research Group, nearly 60% of organizations indicate that they are establishing a hybrid cloud environment. This means they are leveraging existing data center assets (traditional and private cloud), colocation, and cloud resources to meet various performance requirements. What seems a bit surprising, based on anecdotal conversations, is that only 18% of the participants in the survey indicated that the public cloud would become their sole computing platform.

Hybrid Cloud investment predictions for 2019 according to 451 Research Group

• Enterprise data centers show a 2.5% growth rate, down from 6% two years ago
• Colocation and managed service facilities are increasing at 8.1%, surpassing enterprise-owned assets
• Cloud infrastructure is growing by 18%, which is expected, but is showing signs of slower growth

For tracking data center growth and density, the industry tracking site datacentermap.com gives a great visualization of private, shared, and public cloud-based data centers.

Drivers and Demands for the Hybrid Cloud

• Edge computing demand is driven by low-latency/high-bandwidth data demands, from autonomous vehicles and online transactions to IoT devices
• Cost management of applications where IT organizations are optimizing workloads based on infrastructure and networking costs, risk management, and performance demands
• Performance factors drive workload placement – distributed computing for high transactional requirements; traditional centralized infrastructure for compute intense batch reporting and analytics
• Security and compliance drive workload placement. Traditionally, it was believed that an organization’s own data center offered more security and auditability. However, as these facilities age, colocation and public cloud vendors are adopting modern and more sophisticated systems.

Trends
Colocation services have evolved over the years from fortress-style data centers to highly connected internet exchange hubs. We are now seeing them evolve further to host more complex physical and virtual infrastructure capable of supporting the workload mobility demanded by hybrid cloud and edge computing needs.

With this maturing, or evolution, competition among colocation providers is increasing. They now need to add unique services to entice and retain their tenants. This competition is good for both the customer and the colocation providers themselves: enhanced services bring significant value to the tenant without the tenant paying the high cost of innovation, and improved services help bring operating costs down while driving more customer demand.

Colocation vendors are starting to provide more services and transparency to support the tenant’s workload management. To manage workloads in this complex hybrid environment, organizations need visibility into the entire stack: where a workload is, where it runs best, where it has the best security profile, and where it costs least to run.

Complex environments receive millions of inputs from devices and sensors daily, and it is impossible to manually monitor and react to all of the valuable data that is available. Artificial Intelligence (AI) and machine learning are being adopted to help enumerate the possible outcomes, enabling smarter decision making and improved failure predictability.
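A minimal sketch of that kind of automated flagging, using a simple rolling-window z-score over simulated sensor readings. The window size, threshold, and data are illustrative assumptions, not a production monitoring design:

```python
from statistics import mean, stdev

def flag_anomalies(readings, window=5, threshold=3.0):
    """Return indices of readings that deviate sharply from the
    preceding rolling window of samples."""
    anomalies = []
    for i in range(window, len(readings)):
        recent = readings[i - window:i]
        mu, sigma = mean(recent), stdev(recent)
        # Flag the reading if it sits more than `threshold` standard
        # deviations away from the recent baseline.
        if sigma > 0 and abs(readings[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# Simulated temperature feed (in C) with one spike at index 7
feed = [21.0, 21.2, 20.9, 21.1, 21.0, 21.1, 20.8, 35.0, 21.0, 21.2]
```

Real deployments replace this heuristic with trained models, but the shape of the pipeline is the same: a stream of sensor values in, a short list of events worth a human's attention out.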

The demand to optimize and tier workloads is driving the need for more in-depth management tools. DCIM tools are proving to be the ideal solution for both tenants and colocation providers to understand the nature of a workload and identify where it should move, based on the parameters of cost, performance, and risk.

7 DCIM Considerations When Choosing a Colocation Provider

1. Is the location easily accessible for servicing vendors and employees, and are there costs and risks driven by environmental conditions?
2. Are the power supply and costs reliable?
3. Is there appropriate cooling and PUE management to keep costs consistent and predictable while reducing overheating risks?
4. Does the host use Data Center Infrastructure Management tools, and can you interface with them to monitor and manage your environment?
5. Is there adequate physical security and reliable audit trail documentation?
6. Is there a comprehensive and integrated workload and workflow system in place?
7. Is the Colocation vendor’s Service Level Agreement in alignment with the tier requirements of the applications placed there, and are they transparent regarding SLA performance?
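On item 3, PUE (Power Usage Effectiveness) is the standard efficiency ratio: total facility power divided by the power delivered to IT equipment, with 1.0 as the theoretical ideal. A quick illustrative calculation, using made-up figures:

```python
def pue(total_facility_kw, it_equipment_kw):
    """Power Usage Effectiveness = total facility power / IT equipment power.
    1.0 is the theoretical ideal; lower values mean less power is lost
    to cooling, power distribution, and other overhead."""
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Illustrative: a facility drawing 1,500 kW and delivering 1,000 kW to IT gear
example_pue = pue(1500, 1000)   # 1.5
```

A tenant comparing colocation providers can ask for these two power figures per facility; a provider whose PUE drifts upward over time is passing its cooling inefficiency on as cost.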

Challenges
The complexity of technology and the hybridization of the compute infrastructure require new skill sets and a broader depth of talent that organizations are challenged to find. These factors are driving the need for automation, as found in DCIM, and a resurgence in engagement with integrators such as IBM, Atos, and Accenture.

Quality of connectivity is more critical now than ever. Organizations need to know latency, jitter, and packet loss to ensure predictable, quality data transactions. Achieving quality network connectivity requires a dedicated and secure connection not typically found in a standard IP connection, and these connections can be very costly and time-consuming to manage. Colocation providers can provide these connections and guarantee them with their SLAs.
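As a rough sketch of how those three metrics might be derived from a series of probe round-trip times (the function and inputs are hypothetical, and jitter here is a simple mean of absolute inter-sample differences rather than the smoothed estimator defined in RFC 3550):

```python
def link_quality(rtts_ms):
    """Summarize latency, jitter, and loss from probe round-trip times.
    rtts_ms: round-trip times in milliseconds; None marks a lost probe."""
    received = [r for r in rtts_ms if r is not None]
    loss_pct = 100.0 * (len(rtts_ms) - len(received)) / len(rtts_ms)
    latency = sum(received) / len(received)
    # Jitter approximated as the mean absolute difference between
    # consecutive received samples.
    deltas = [abs(b - a) for a, b in zip(received, received[1:])]
    jitter = sum(deltas) / len(deltas) if deltas else 0.0
    return {"latency_ms": latency, "jitter_ms": jitter, "loss_pct": loss_pct}

probes = [12.0, 13.0, 12.5, None, 12.8]  # one lost probe out of five
```

An SLA-backed dedicated circuit is essentially a contractual bound on the numbers this kind of probe produces, which is why transparency about SLA performance appears in the considerations above.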

The growing, diverse footprint of the hybrid cloud demands consolidated visibility for capacity planning, workload placement, and optimization. DCIM helps provide the visibility to manage and understand these workloads and to automate the workflows needed to manage them across the hybrid environment.

New technology, scarce skills, and a lack of DevOps tools challenge the move to hybrid cloud, adding complexity to capacity management. More and more computing platforms demand metrics to determine where to place a workload, and more and more applications are being placed on non-optimal platforms. Organizations need to know resource availability and how much subscription to commit to for maximum optimization of workloads.

What To Do Now
When selecting a colocation partner, first work through the seven DCIM considerations above. View the provider as an extension of your own compute infrastructure. Make sure they provide tools giving you visibility, management, and SLA validation for your footprint. Ideally, find a colocation partner that provides a portal that allows you to manage your environment and integrates with your own DCIM solution.

The post Hybridization of the Enterprise Compute Infrastructure appeared first on Nlyte.

Source: Nlyte

Global Survey Reveals Key Challenges and Technologies Expected to Drive the Next Phase of Digital Transformation

Source: New Relic

Gregory Ouillon Joins New Relic as EMEA Field Chief Technology Officer

Source: New Relic
