IT Managed Services Provider Resource Recommendation Update on March 31, 2021

Terms to know

What is Machine Learning Operations (MLOps)?

Machine learning operations (MLOps) is a set of practices that combines the development and ongoing maintenance of machine learning (ML) models into one seamless process. The goal is to establish reliable communication and collaboration between data scientists and machine learning operations professionals in order to properly manage and shorten an artificial intelligence (AI) product’s lifecycle. The three main components of MLOps are machine learning, DevOps (IT), and data engineering.

It achieves this by automating as much of the process as possible while striking a balance between improving product quality and meeting business and market requirements.

MLOps works on the same principles as DevOps. In addition to the software developers (devs) and IT operations teams, MLOps includes data scientists and ML engineers. The result is a continuous production loop that starts with the data scientists and ML engineers collecting data and building models.

The workflow then proceeds to the devs, who handle the product’s verification and packaging before sending it to the IT operations team to release, configure, and monitor the result. The loop continues as the feedback is used to plan and create the next update to the model, returning the work to the data experts.

MLOps produces noticeable results because it bridges the gap between data scientists and ML engineers on one side, and devs and IT teams on the other. MLOps was developed with the knowledge that not all data scientists and ML engineers are experienced in programming languages and IT operations. Unlike older models, in which every stage of ML development was independent, MLOps creates a continuous feedback loop between the three departments, enabling a faster development cycle and higher product quality, all while allowing professionals to focus on what they know best instead of having to learn skills at the opposite end of the spectrum.
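
To make that loop concrete, here is a minimal, hypothetical sketch in Python of the cycle described above; every function name is an illustrative placeholder rather than part of any real MLOps framework:

    # Hypothetical MLOps loop: data -> model -> package/deploy -> monitor -> feedback
    def collect_data():
        # Data engineers gather and clean raw data.
        return [{"feature": 1.0, "label": 1}, {"feature": 0.2, "label": 0}]

    def train_model(data):
        # Data scientists fit a model; here, a trivial threshold "model".
        threshold = sum(row["feature"] for row in data) / len(data)
        return {"threshold": threshold}

    def package_and_deploy(model):
        # Devs verify and package; IT ops release and configure.
        print(f"deploying model with threshold={model['threshold']:.2f}")

    def monitor(model):
        # IT ops monitor the released product and report feedback.
        return {"accuracy": 0.91}  # placeholder metric

    # Feedback from monitoring feeds the next data/modeling cycle.
    for iteration in range(3):
        model = train_model(collect_data())
        package_and_deploy(model)
        print(f"iteration {iteration}: feedback={monitor(model)}")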

What is Internet Streaming Media Alliance (ISMA)?

The Internet Streaming Media Alliance (ISMA) was a nonprofit corporation operating in the early 21st century to help standardize and explore open standards for streaming media. ISMA was a corporate collective, with participants such as Apple, Cisco and Sun Microsystems.

In conjunction with other groups, the Internet Streaming Media Alliance helped to develop standards for different audio and video formats, including MPEG formats, over IP networks.

Creating specifications, doing testing and communicating with members, ISMA helped to form some of the common practices for broadcasting over the Internet.

In 2010, Reuters reported that ISMA was merging with the MPEG Industry Forum (MPEGIF), another party involved in handling streaming video standards. MPEGIF describes itself as a forum for exchanging information and a party in the production of universal media formats and compression standards.

What is a Stream Control Transmission Protocol (SCTP) Endpoint?

A Stream Control Transmission Protocol (SCTP) endpoint is the designated sender or receiver of SCTP packets, identified by a combination of unique transport addresses. A transport address used by one SCTP endpoint may not be used by any other SCTP endpoint.

SCTP allows a single endpoint to span multiple Internet Protocol (IP) addresses, a capability known as multi-homing. This provides better data survivability during network failure.

SCTP multi-homing is only used for redundancy.

During SCTP association initialization, an endpoint may receive messages from several of its peer’s addresses, because the operating system (OS) may select source addresses at random; messages that appear to come from different sources can therefore belong to the same peer. An SCTP endpoint may use multiple IP addresses, but all of them must share the same port number.

To ensure security, reply messages are always sent to the message initiator. For example, when a server receives a client’s SCTP association initiation, the server always sends its acknowledgment back to the client’s originating IP address.
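
As a rough illustration, the sketch below creates a single SCTP endpoint in Python; it assumes a Linux host whose kernel has SCTP support, since socket.IPPROTO_SCTP is not available on every platform. Binding to the wildcard address loosely mirrors multi-homing: the endpoint can receive on any of the host’s IP addresses, all sharing one port number.

    import socket

    # Assumes Linux with kernel SCTP support; IPPROTO_SCTP is platform-dependent.
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM, socket.IPPROTO_SCTP)

    # One port number is shared across all of the endpoint's addresses;
    # the wildcard address accepts traffic arriving on any local IP.
    sock.bind(("0.0.0.0", 5000))
    sock.listen(1)
    print("SCTP endpoint listening on port 5000")
    sock.close()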

What is CCNA (Cisco Certified Network Associate)?

CCNA (Cisco Certified Network Associate) is a category of technical certifications offered by Cisco for early-career networking professionals. The CCNA is the second level of accreditation, one step above Cisco Certified Entry Networking Technician (CCENT) and directly below the CCNP (Cisco Certified Network Professional). Cisco offers five Cisco Career Certification programs and levels of accreditation: Entry, Associate, Professional, Expert and Architect.
Cisco redesigned the CCNA program in 2013 to offer the certification in various subspecialties related to networking. For example, the CCNA Cloud certification focuses on the skills required for cloud networking, while the CCNA Wireless certification validates an individual’s competence in wireless local area networks (WLANs).

CCNA certificates are available in the following ten areas: cloud, collaboration, cybersecurity operations, data center, design, industrial/IoT, routing and switching, security, service provider and wireless.

The CCNA routing and switching category is the most similar to the pre-2013 CCNA program. A CCNA routing and switching certification covers the fundamentals of enterprise networking, including LAN switching, IP addressing, routing, subnetting and more. It assesses an individual’s ability to deploy, configure, manage and troubleshoot enterprise networks. In 2016, Cisco updated the CCNA routing and switching certification to place more emphasis on software-defined networking (SDN), network-based analytics and network functions virtualization (NFV).

“Cisco didn’t invent IT certification exams, but it did perfect them in a way only Novell had done at a similar level before.” – Teren Bryson

Related Terms: Cisco Certified Entry Networking Technician, Cisco Certified Network Professional, wireless LAN, software-defined networking, network functions virtualization

What is Microsoft Technology Associate (MTA) Certification?

Microsoft Technology Associate (MTA) certification is the name of a suite of entry-level certifications offered by Microsoft that signify fundamental technology knowledge in those who earn them.

According to Microsoft, earning the certificate(s) provides the core knowledge needed to begin a career in technology. The certification course consists of a number of prep resources and exams. Passing one of the exams earns one MTA credit.

MTA is a recent installment in Microsoft Learn, which is a collection of learning paths, exams and other resources that include over 240 different certifications. MTA is unique in that it is for beginners looking to enter a career in technology, whereas many of the other certifications are more specialized or role-focused. MTA is designed for IT generalists and students.

The MTA exams assume that exam-takers have some prior hands-on experience or training but do not assume that the takers have technology job experience.

“The MTA exams have minimal prerequisites and, therefore, gives those with limited information technology experience an opportunity to be credentialed.” – Ben Lutkevich

Related Terms: object-oriented programming, Microsoft Visual Basic, Microsoft SQL Server, Active Directory, Software Engineer, Database Administrator

What is a Stream Control Transmission Protocol (SCTP) Association?

A Stream Control Transmission Protocol (SCTP) association is a connection between two SCTP endpoints, uniquely identified by the endpoints’ transport addresses. Only one SCTP association occurs between two endpoints at a time.

The SCTP protocol is specified by RFC 4960, which updates RFC 2960 and RFC 3309.

What is Contactless Payment?

A contactless payment is a wireless financial transaction in which the customer authorizes monetary compensation for a purchase by moving a security token in close proximity to the vendor’s point of sale (PoS) reader. Popular security tokens for contactless payment include chip-enabled bank cards and smartphone digital wallet apps. Contactless payments may also be referred to as touch-free, tap-and-go or proximity payments. When goods or services are purchased through a contactless payment, the process may then be referred to as a frictionless checkout.

Contactless payments are known for being secure because the customer does not share billing or payment information directly with the vendor. Instead, all communication is encrypted and each purchase is tokenized with a one-time transaction number. Should a wireless transmission be intercepted, the only information the attacker will get is the one-time code that was used to identify a particular transaction.
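
The sketch below is a deliberately simplified model of that tokenization idea. Real contactless schemes rely on issuer-managed token vaults and EMV cryptograms, which are omitted here; the function shown is hypothetical.

    import secrets

    def tokenize_transaction(card_number: str) -> str:
        # Return a one-time code that stands in for the card number; the
        # vendor's reader only ever sees the token, never the card details.
        # (A real issuer would record a token -> card mapping; omitted here.)
        return secrets.token_hex(8)

    token = tokenize_transaction("4111111111111111")
    print(f"transaction token: {token}")  # intercepting this reveals nothing reusable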

The adoption of contactless payment has been accelerated by COVID-19 and consumers’ desire to avoid person-to-person contact when making in-store purchases. The U.S. Payments Forum and EMV (Europay, Mastercard and Visa) are responsible for setting the technical standards for smart payment cards, as well as for the PoS readers that accept them.

“According to data from Barclaycard, contactless payments accounted for 88.6% of total card payments in 2020 as restrictions on contact-based payments drove people to contactless.” – Karl Flinders

Related Terms: Apple Pay, Amazon Go, tokenization, smart card, EMV card, Near-Field Communication

What is Real Time Streaming Protocol (RTSP)?

Real Time Streaming Protocol (RTSP) is a protocol that provides a framework for real-time media data transfer at the application level. The protocol focuses on establishing and controlling multiple time-synchronized data delivery sessions for continuous media such as video and audio. In short, Real Time Streaming Protocol acts as a network remote control for real-time media files and multimedia servers.

Real Time Streaming Protocol is specified in RFC 2326.

Taking advantage of the streaming process, Real Time Streaming Protocol works within the bandwidth available between the source and destination and breaks large data into packet-sized pieces. This allows the client software to play one packet while decompressing the second and downloading the third, so users can watch or listen to the media without perceiving a break between the data files. Some of the features of Real Time Streaming Protocol are similar to HTTP.
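
Here is a toy sketch of that pipelined playback pattern; the download, decompress and play stages are illustrative stand-ins, not RTSP API calls:

    from collections import deque

    def download(i):   return f"compressed-packet-{i}"
    def decompress(p): return p.replace("compressed-", "")
    def play(p):       print(f"playing {p}")

    buffer = deque()
    buffer.append(decompress(download(0)))      # prime the buffer before playback

    for i in range(1, 5):
        buffer.append(decompress(download(i)))  # fetch ahead of playback
        play(buffer.popleft())                  # play without waiting on the network
    play(buffer.popleft())                      # drain the last buffered packet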

What is Citizen Development?

Citizen development is a business trend that encourages employees to experiment with low-code/no-code (LCNC) software development platforms. The citizen development movement encourages line of business (LOB) employees to become comfortable with the logic that programmers use to build software. LCNC platforms use visual symbols to represent blocks of code; non-technical programmers can drag and drop blocks into a flowchart and add actions to create applications.
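
As a toy illustration, the sketch below models drag-and-drop blocks as named actions that the platform executes in flowchart order; the block names and actions are hypothetical:

    # Each "visual block" maps to an action the platform knows how to run.
    ACTIONS = {
        "collect_form": lambda ctx: ctx.update(form={"email": "user@example.com"}),
        "validate":     lambda ctx: ctx.update(valid="@" in ctx["form"]["email"]),
        "notify":       lambda ctx: print(f"notify sales: {ctx['form']['email']}"),
    }

    # The flowchart a citizen developer assembles by drag and drop:
    flow = ["collect_form", "validate", "notify"]

    context = {}
    for block in flow:
        ACTIONS[block](context)  # the platform runs each block in sequence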

Citizen development empowers business users to create new applications or software features in days — or even hours — by using an approved LCNC development environment selected and managed by the organization’s information technology (IT) department. This approach not only speeds innovation, it also makes the application development process more efficient while helping to eliminate some of the security problems associated with shadow IT and employee use of third-party apps.

The growing popularity of low- and no-code platforms is being driven by multiple factors, including a lack of skilled software developers, the need to improve turnaround time for development projects and a desire by line of business (LOB) employees to be as agile as possible in a competitive marketplace.

“All that a low-code user really needs is a clear understanding of proper business workflows. So, essentially, a business can boost development productivity from a broader group of everyday employees without the need to hire more — or, at least, as many — developers.” – Stephen J. Bigelow

Related Terms: LCNC, shadow IT, line of business, application platform as a service, rapid application development

What is Regional Broadband Global Area Network (RBGAN)?

Regional broadband global area network (RBGAN) is a now-defunct IP-based, shared carrier service that was offered on a regional basis. The service was provided by mobile satellite network company Inmarsat, but was withdrawn in 2008 and replaced by the broadband global area network (BGAN), a global satellite Internet network.

Inmarsat is a British company that provides telephony and data services to users worldwide. The company has developed a series of networks since it was founded in 1979. The withdrawal of RBGAN occurred because it was superseded by the newer BGAN technology.

BGAN signal acquisition requires line-of-sight with the geostationary satellite and requires the user to have a compass and a general idea of the satellite’s location. Slowly turning the terminal will soon indicate signal capture, which can be done in less than a minute for an experienced user with a good signal.

Some limitations include prohibited use on the open ocean in a moving vessel, although Inmarsat does provide broadband service for maritime communications using all 14 satellites. Regular terminals also cannot be used on aircraft.

What is Broadband Wireless Access (WiBro)?

Broadband wireless access (wireless broadband, or WiBro) refers to network technology that provides inherent mobility within a geographical area and managed mobility between fixed networks. Broadband wireless access facilitates and ensures mobile device connectivity and communication.

Broadband wireless access is also known as wireless local loop (WLL), fixed-radio access (FRA), fixed wireless access (FWA), radio in the loop (RITL) and metro wireless (MW).

Broadband wireless access ensures full signal coverage and functions including registration, routing, forwarding, and intersystem communication. Wireless terminals or base stations that remain within the same antenna beam retain transport capability.

Managed mobility is nonexistent in fixed wireless networks. Thus, there are issues with registration, call routing, call forwarding, and intersystem communication.

What is Scrum Master?

A Scrum Master is a facilitator for an Agile development team. They are responsible for managing the exchange of information between team members. Scrum is a project management framework that enables a team to communicate and self-organize to make changes quickly, in accordance with Agile principles.

Although the scrum analogy was first applied to manufacturing in a paper by Hirotaka Takeuchi and Ikujiro Nonaka, the approach is often used in Agile software development and other types of project management. The term comes from the sport of rugby, where opposing teams huddle together during a scrum to restart the game. In product development, team members huddle together each morning for a stand-up meeting where they review progress and essentially restart the project.

A Scrum Master leads a scrum. Scrums are daily meetings conducted by Agile, self-organizing teams that allow the team to convene, share progress and plan for the work ahead. Some teams have a fixed Scrum Master, while others alternate the role with various team members occupying the position on different days. No one approach is right, and teams can choose to appoint the Scrum Master role as best fits their needs.

During the daily meetings, the Scrum Master asks the team members three questions:

  • What did you do yesterday?
  • What will you do today?
  • Are there any impediments in your way?

The Scrum Master then uses the answers to those questions to inform tactical changes to the team’s process, if necessary.

“You don’t have to be a full-stack developer to become a Scrum Master, but you should be creative and quick with a whiteboard marker.” – Diane Hoffman

Related Terms: Scrum, stand-up, sprint, story, story point, Agile retrospective

What is Project Post-Mortem?

A project post-mortem, also called a project retrospective, is a process for evaluating the success (or failure) of a project’s ability to meet business goals.

A typical post-mortem meeting begins with a restatement of the project’s scope. Team members and business owners are then asked by a facilitator to share answers to the following questions:

  • What worked well for the team?
  • What did not work well for the team?

The facilitator may solicit quantitative data related to cost management or qualitative data such as perceived quality of the end product. Ideally, the feedback gathered from a project post-mortem will be used to ensure continuous improvement and improve the management of future projects.

Post-mortems are generally conducted at the end of the project process, but are also useful at the end of each stage of a multi-phase project. The term post-mortem literally means “after death.” In medicine, the term is used to describe an examination of a dead body in order to discover the cause of death.

“A sprint retrospective helps determine what went well and what needs to be improved in the future. The goal is team-wide incremental improvement.” – David Carty

Related Terms: Agile retrospective, six thinking hats retrospective, peer review, kaizen, PMI retrospective

What is Digital Video Broadcasting (DVB)?

Digital video broadcasting (DVB) is a standard for digital television and video that is used in many parts of the world. Various DVB standards cover satellite, cable and terrestrial television as well as video and audio coding for file formats like MPEG.

Digital video broadcasting may also be referred to as digital television.

Since its creation in the 1990s, digital video broadcasting has been adopted all over Europe and in many areas of Africa, Latin America and other parts of the world. A few countries, including the United States, use a different standard called ATSC, which was developed by the Advanced Television Systems Committee.

Both the DVB and the ATSC represent a group of electronics and telecom companies that help to set up agreed standards for visual broadcasting through digital technology. DVB is the product of the DVB Project, which involved several hundred companies along with other parties like regulators and broadcasters. Various groups participated in the pan-European adoption of DVB standards in order to have a consistent standard for new digital video technologies and consumer or commercial services.

DVB standards cover many aspects of digital video and are implemented in different ways according to the broadcast medium and other factors. It’s worth noting that some of the standards are developed by drawing on pre-existing ISO/IEC standards. Some aspects of DVB are patented according to their general use and value.

What is Communications and Networking Riser (CNR)?

A Communications and Networking Riser (CNR) is a riser card developed by Intel for the advanced technology extended (ATX) family of motherboards. It is used for specialized networking, audio and telephony equipment. When introduced, CNR offered savings to motherboard manufacturers by removing analog I/O components from the motherboard.

While CNR slots were common on Pentium 4 motherboards, they have largely been phased out in favor of on-board or embedded components.

The CNR specification is open to the industry. It was used to inexpensively integrate local area networks, modems and audio subsystems with personal computers, supporting broadband, multichannel audio, analog modem and Ethernet-based networking through the audio, modem and network interfaces of core logic chipsets. CNR could also be expanded to meet the requirements of new technologies such as DSL. As a scalable motherboard riser card and interface, CNR has the capability to minimize electrical noise interference by physically separating the noise-sensitive elements from the motherboard’s communication system.

Red Hat’s State of Enterprise Open Source 2021 Report

The report reveals that 90% of IT leaders are using enterprise open source. Read it to learn more about this year’s trends.

Google announced its sponsorship of two full-time developers to support Linux security

The two developers sponsored full time by Google are Gustavo Silva, whose work includes eliminating some classes of buffer overflow risks and strengthening kernel self-protection, and Nathan Chancellor, who fixes bugs in the Clang/LLVM compilers and improves compiler warnings.

Terms to know

What is Broadband Global Area Network (BGAN)?

Broadband global area network (BGAN) is a global satellite Internet network operated by satellite communication company Inmarsat. It is designed for low-cost voice and data connectivity. It can be accessed anywhere on the earth’s surface, excluding the poles. It uses a constellation of three geostationary satellites at a time (of the 14 in the system), called I-4, designed to communicate with lightweight, surface-based, portable terminals about the size of a laptop computer.

High-end BGAN terminals have downlink speeds of 492 Kbps and upload speeds of 300 to 400 Kbps. However, the latency of 1 to 1.5 seconds round trip for the background IP service is an issue. Streaming services are slightly faster at 800 milliseconds to 1 second. Performance-enhancing proxies, software and Transmission Control Protocol (TCP) packet accelerators are used to boost performance.

Ground-based terminals have similar capabilities but are built by several manufacturers. The most expensive calls are from cell phones, landline phones and satellite phones. But voice quality is high, and this is the fastest global data link available. It is easily set up with no user restrictions, other than cost.

Signal acquisition requires line-of-sight with the geostationary satellite and requires the user to have a compass and a general idea of the satellite’s location. Slowly turning the terminal will soon indicate signal capture, which can be done in less than a minute for an experienced user with a good signal.

Some limitations include prohibited use on the open ocean in a moving vessel. Regular terminals also cannot be used on aircraft.

What is PM triple constraint?

In project management, the triple constraint is a model that describes the three most significant restrictions on any project: scope, schedule and cost.

The triple constraint is sometimes referred to as the project management triangle or the iron triangle. In the typical triangular model, scope, schedule and cost are constraints that form the sides of the triangle, with quality as the central theme. (An alternative to the triangle, the project management diamond, adds quality as the fourth side of the model and changes the central theme to customer expectations.)

The three constraints are interdependent: None of them can be altered without affecting one or both of the others. For example, if the scope of a project is increased, it is likely to take longer and/or cost more. Likewise, an earlier deadline is almost certain to either require more money or a less ambitious scope.

The difficulty of satisfying expectations for all three constraints is sometimes expressed as pick two: the concept that in any set of three desired qualities, only two can be delivered. If, for example, clients want to keep the budget low, the product is likely to take longer or be of lower quality.

“Often, higher-level managers will find lots of excuses for not making a change, usually couched in the triple constraint: The change will be too expensive, time-consuming or out of scope.” – Geri Owen

Related Terms: project management framework, project scope, IT project manager, pick two, Project Management Office

What is Online Data Storage?

Online data storage is a virtual storage approach that allows users to use the Internet to store recorded data in a remote network. This storage method may be offered as a component of a cloud service or used on its own as an alternative to on-site data backup.

Online data storage is generally defined in contrast to physical data storage, where recorded data is stored on a hard disk or local drive, or, alternately, a server or device connected to a local network. Online data storage usually involves a contract with a third-party service that will accept data routed through Internet Protocol (IP).

Advantages of online data storage and similar services include data backup security and convenience. Smaller businesses and entities often have networks that are unable to efficiently handle data backups or provide compliance with security standards. In such cases, a vendor that provides online data storage is often a viable solution.

What is Computer-Mediated Communication?

Computer-mediated communication (CMC) is a process in which human data interaction occurs through one or more networked telecommunication systems. A CMC interaction occurs through various types of networking technology and software, including email, Internet Relay Chat (IRC), instant messaging (IM), Usenet and mailing list servers.

CMC technology saves time and money in IT organizations by facilitating the use of all communication formats.

Computer-mediated communication is divided into synchronous and asynchronous modes. In synchronous communication, all participants are online simultaneously. In asynchronous communication, there are no such time constraints on messages and responses, as with email.

Key CMC features include conversation recordability, formal communication and user anonymity, depending on the type of software, such as IM. However, interpreting a CMC user’s statements may be difficult due to the absence of the nonverbal cues present in face-to-face communication.

What is Communication Streaming Architecture?

Communication Streaming Architecture (CSA) is a communication interface developed by Intel that links the memory controller hub (MCH) on the chipset to the network controller. The device is an individualized connection that does not use the peripheral component interconnect (PCI) bus on the input/output (I/O) controller hub. The CSA offloads network traffic from the PCI bus and reduces bottlenecks by freeing up bandwidth for other I/O processes.

CSA was only used with Intel chipsets manufactured in 2003. It was discontinued a year later and replaced by PCI Express.

By going around the PCI bus, the CSA drastically reduces the number of bottlenecks, which can be a problem for PCI architectures. A bottleneck occurs when the transmission of data is delayed, impaired or completely stopped. The bandwidth freed by offloading network traffic is also available to other devices connected to the I/O controller hub (ICH), such as USB ports or optical disk drives like DVD-ROM.

What is CIA triad?

The CIA triad is a guiding principle that many organizations use to create their information security policies, processes and procedures. CIA stands for confidentiality, integrity and availability. Each component of the triad plays an important role in maintaining information security.

  • Confidentiality: Prevents sensitive information from reaching the wrong people, while making sure that the right people have access. In terms of confidentiality, organizations need to ensure data is secure in transit and at rest. Even if data is lost or hacked, it should not be readable.
  • Integrity: Maintains the consistency, accuracy, and trustworthiness of data over its entire life cycle. When it comes to integrity, organizations need to ensure that personal data is accurate, complete, up to date and kept only as long as necessary; a minimal integrity check is sketched after this list.
  • Availability: Ensures data can be accessed by authorized people when they need it. In terms of availability, organizations also need to understand how individuals and their work could be affected if they are not able to access the data they need.
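
Here is the minimal integrity sketch referenced above: a digest recorded when data is stored can later reveal tampering or corruption.

    import hashlib

    record = b"account=42;balance=100.00"
    stored_digest = hashlib.sha256(record).hexdigest()  # recorded at write time

    def is_intact(data: bytes, expected_digest: str) -> bool:
        # Recompute and compare the digest before trusting the data.
        return hashlib.sha256(data).hexdigest() == expected_digest

    print(is_intact(record, stored_digest))                        # True
    print(is_intact(b"account=42;balance=999.00", stored_digest))  # False: data changed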

“Almost every discussion of cyber security relates back to the confidentiality, integrity and availability (CIA) triad.” – Biju Varghese

Related Terms You Should Know: data integrity, infosec, information assurance, NIST Cybersecurity Framework, CISA, NICE framework

What is Collaboration Platform?

A collaboration platform is a category of business software that adds broad social networking capabilities to work processes. An important goal of collaboration software is to foster innovation by incorporating knowledge management into business processes.

Vendors are taking different approaches to building collaboration platforms. Some are adding a social layer to legacy business applications, while others are embedding collaboration tools into new products. Enterprise-level collaboration platforms share certain attributes — they need to be easily accessible and easy to use, they need to be built for integration and they need to come with a common set of functions that support team collaboration, issue tracking and messaging.

According to a recent study conducted by Nemertes, 27% of study participants using team collaboration platforms had measurable productivity gains, and 13% had shorter project completion times.

“In the business world, the value of enterprise collaboration software extends well beyond document sharing. Companies are creating knowledge-sharing communities to boost employee productivity and effectiveness and bring external collaborators into the fold.” – Kara Gattine

Related Terms: knowledge management, team collaboration, cloud collaboration, contextual collaboration, collaborative BI, collaboration diagram

What is Virtual Telecommunications Access Method (VTAM)?

Virtual Telecommunications Access Method (VTAM) is an IBM application programming interface that allows application programs to communicate or exchange data with external devices such as mainframes, communications controllers, terminals, etc. VTAM helps to abstract these devices into logical units so that developers do not need to know the underlying details of the protocols used by these devices.

VTAM includes macro instructions that provide a consistent protocol for device connections and can use synchronous or asynchronous operations. This protocol enables different types of legacy programs to be connected to modern device systems.

VTAM eventually became part of IBM’s Systems Network Architecture and Systems Application Architecture.

What is Digital Divide?

The digital divide refers to the difference between people who have easy access to the Internet and those who do not. A lack of access is considered a disadvantage because of the huge knowledge base that can only be found online.

The digital divide appears in a number of different contexts, including:

  • Differences between rural and urban Internet access
  • Socioeconomic differences between people of different races, incomes and education levels that affect their ability to access the Internet
  • Differences between developed, developing and emerging nations in terms of the availability of Internet

The digital divide was once used to describe different rates of technology adoption by different groups. More recently, however, Internet access has increasingly been seen as the primary advantage technology can grant, since it represents a staggering store of knowledge and resources. In this sense, the digital divide may be shrinking as cheaper mobile devices proliferate and network coverage improves worldwide.

What is Big Data?

Big data refers to a processing approach that is used when traditional data mining and handling techniques cannot uncover the insights and meaning of the underlying data. Data that is unstructured, time sensitive or simply very large cannot be processed by relational database engines. This type of data requires a different processing approach called big data, which uses massive parallelism on readily available hardware.
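
As a small-scale sketch of that parallel approach, the example below splits a word count across worker processes in MapReduce style; real big data systems distribute the same pattern across many machines rather than one process pool.

    from collections import Counter
    from functools import reduce
    from multiprocessing import Pool

    def map_count(chunk: str) -> Counter:
        # Map step: count words in one chunk of the data.
        return Counter(chunk.split())

    if __name__ == "__main__":
        chunks = ["storm front cold", "cold rain storm", "sun sun wind"]
        with Pool(processes=3) as pool:
            partials = pool.map(map_count, chunks)       # map in parallel
        totals = reduce(lambda a, b: a + b, partials)    # reduce step
        print(totals.most_common(3))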

Quite simply, big data reflects the changing world we live in. The more things change, the more the changes are captured and recorded as data. Take weather as an example. For a weather forecaster, the amount of data collected around the world about local conditions is substantial. Logically, it would make sense that local environments dictate regional effects and regional effects dictate global effects, but it could well be the other way around. One way or another, this weather data reflects the attributes of big data, where real-time processing is needed for a massive amount of data, and where the large number of inputs can be machine generated, personal observations or outside forces like sun spots.

What is Procure to pay (P2P)?

Procure to pay is the process of requisitioning, purchasing, receiving, paying for and accounting for goods and services. It gets its name from the ordered sequence of procurement and financial processes, starting with the first steps of procuring a good or service to the final steps involved in paying for it.

Procure to pay is a process, not a technology, although there is software expressly designed to handle the entire procure to pay process or components of it. One of the biggest benefits of integrated procure to pay suites is their ability to consolidate data, and enable a process called intelligent spend management that executives can use to get more control over expenses.

Vendors of e-sourcing and procurement software, such as SAP Ariba and Coupa Software, have developed significant procure to pay features. A few niche players, among them Basware, BirchStreet Systems, GEP, Jaggaer, Verian and Zycus, claim to automate the entire process.

“Intelligent spend management is a forward-looking way to think about procurement. It’s about using technology that focuses its power on the tasks that can (and should) be automated or eliminated so you can focus on the aspects of business that can (and should) have human expertise.” – Chris Haydon

Related Terms You Should Know: IT procurement, due diligence, customer persona, sales funnel, request for quotation, request for proposal, statement of work, supplier relationship management, procurement software, procurement lead time, e-procurement, service level agreement

What is Bare-metal Hypervisor?

A bare-metal hypervisor, also known as a Type 1 hypervisor, is virtualization software that has been installed directly onto the computing hardware.

This type of hypervisor controls not only the hardware, but one or more guest operating systems (OSes). In comparison, a hosted hypervisor, or Type 2 hypervisor, runs within the host OS, so the underlying hardware is managed by the host OS.

Bare-metal hypervisors feature high availability and resource management; they also provide better performance, scalability and stability because of their direct access to the hardware. On the other hand, the built-in device drivers can limit hardware support.

Examples of popular bare-metal hypervisors are Microsoft Hyper-V, Citrix XenServer and VMware ESXi.

“Type 1 hypervisors provide increased security because their location in the physical hardware eliminates the attack surface often found in an OS.” – Stefani Munoz

Related Terms: bare metal cloud, hosted hypervisor, bare metal environment, bare metal restore, bare metal provisioning

What is Cloud Storage?

Cloud storage is a cloud computing model in which data is stored on remote servers accessed from the internet, or “cloud.” It is maintained, operated and managed by a cloud storage service provider on storage servers that are built on virtualization techniques.

Cloud storage is also known as utility storage – a term subject to differentiation based on actual implementation and service delivery.

Cloud storage works through data center virtualization, providing end users and applications with a virtual storage architecture that is scalable according to application requirements. In general, cloud storage operates through a web-based API that client applications call remotely to perform input/output (I/O) and read/write (R/W) operations against the provider’s storage infrastructure.
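
As one concrete example of that web-based API pattern, the sketch below writes and reads an object in Amazon S3 with boto3; it assumes the AWS SDK for Python is installed and credentials are configured, and the bucket and key names are illustrative.

    import boto3

    # Assumes AWS credentials are configured; bucket/key names are made up.
    s3 = boto3.client("s3")
    s3.put_object(Bucket="example-bucket", Key="reports/2021-03.txt",
                  Body=b"quarterly data")                             # write
    obj = s3.get_object(Bucket="example-bucket", Key="reports/2021-03.txt")
    print(obj["Body"].read())                                         # read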

When delivered through a public service provider, cloud storage is known as utility storage. Private cloud storage provides the same scalability, flexibility and storage mechanism with restricted or non-public access.

What is Data Sandbox?

A data sandbox, in the context of big data, is a scalable and developmental platform used to explore an organization’s rich information sets through interaction and collaboration. It allows a company to realize its actual investment value in big data.

A data sandbox is primarily explored by data science teams that obtain sandbox platforms from stand-alone analytic data marts or logical partitions in enterprise data warehouses.

Data sandbox platforms provide the computing required for data scientists to tackle typically complex analytical workloads.

A data sandbox includes massive parallel central processing units, high-end memory, high-capacity storage and I/O, and typically separates data experimentation and production database environments in data warehouses.

The IBM Netezza 1000 is an example of a data sandbox platform which is a stand-alone analytic data mart. An example of a logical partition in an enterprise data warehouse, which also serves as a data sandbox platform, is the IBM Smart Analytics System.

A Hadoop cluster like IBM InfoSphere BigInsights Enterprise Edition is also included in this category.

What is Autoscaling?

Autoscaling provides users with an automated approach to increase or decrease the compute, memory or networking resources they have allocated, as traffic spikes and use patterns demand. Without autoscaling, resources are locked into a particular configuration that provides a preset value for memory, CPU and networking that does not expand as demand grows and does not contract when there is less demand.

Autoscaling is a critical aspect of modern cloud computing deployments. The core idea behind cloud computing is to enable users to only pay for what they need, which is achieved in part with elastic resources — applications and infrastructure that can be called on as needed to meet demand.

Autoscaling is related to the concept of burstable instances and services, which provide a baseline level of resources and then are able to scale up — or “burst” — as memory and CPU use come under demand pressure.
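
A toy sketch of the core decision follows; the thresholds and instance limits are illustrative, and a production system would read real metrics and call a cloud provider’s scaling API rather than return a number.

    def autoscale(instances: int, cpu_percent: float,
                  scale_up_at: float = 80.0, scale_down_at: float = 20.0,
                  minimum: int = 1, maximum: int = 10) -> int:
        # Burst: add capacity under demand pressure.
        if cpu_percent > scale_up_at and instances < maximum:
            return instances + 1
        # Contract: stop paying for idle capacity.
        if cpu_percent < scale_down_at and instances > minimum:
            return instances - 1
        return instances

    count = 2
    for load in [85.0, 90.0, 45.0, 10.0]:
        count = autoscale(count, load)
        print(f"cpu={load}% -> {count} instance(s)")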

“Autoscaling resources can help organizations ensure they don’t pay for unused cloud capacity. Cloud providers offer native services with autoscaling features, such as AWS Auto Scaling. These features automatically monitor and adjust application scale to meet demands and can be used to prioritize cost, availability or performance.” – Sarah Neenan

Related Terms: scalability, cloud bursting, cloud computing, Microsoft Azure Resource Manager, vertical scalability, AWS Auto Scaling

What is Rich Internet Application (RIA)?

A Rich Internet Application (RIA) is a Web application with many of the same features and appearances as a desktop application. An RIA requires a browser, browser plug-in or virtual machine to deliver a user application. Data manipulation is handled by the server, and user interface and related object manipulation are handled by the client machine.

An RIA usually does not require client machine installation. However, client machine operation requires installation of a platform – such as Adobe Flash, Java or Microsoft Silverlight. The RIA may request that the user download and install a required platform if the platform is not present. Some RIAs run on several Internet browsers, while others run on specified browsers only.

An RIA usually operates in a sandbox, which is a designated desktop area in a client machine. The sandbox limits visibility and access to the client machine’s file system and OS. Sandbox parameters reduce inherent RIA security vulnerabilities.

What is Bitcoin Lightning Network?

The Bitcoin Lightning Network is a cryptocurrency protocol that works with blockchain ledger technology. It was created by Joseph Poon and Thaddeus Dryja in 2017 and is now used to help manage cryptocurrencies such as bitcoin.

Experts refer to the Lightning Network as a “second layer” protocol that works with blockchain to streamline transactions between different nodes. Because participants in a transaction do not have to publicize that transaction on the blockchain immediately, the network can save time, using peer-to-peer design to get around some of the latency in conventional cryptocurrency transactions.
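
The toy model below captures only the off-chain idea: two parties update channel balances locally and publish a single settling transaction later. It omits the real protocol’s cryptographic commitments, penalties and routing entirely.

    class PaymentChannel:
        def __init__(self, alice_funds: int, bob_funds: int):
            # Opening the channel is the first on-chain transaction.
            self.balances = {"alice": alice_funds, "bob": bob_funds}

        def pay(self, sender: str, receiver: str, amount: int):
            # Instant, off-chain balance update between the two peers.
            assert self.balances[sender] >= amount, "insufficient channel funds"
            self.balances[sender] -= amount
            self.balances[receiver] += amount

        def settle(self):
            # The second on-chain transaction records only the net result.
            print(f"broadcast final state to blockchain: {self.balances}")

    channel = PaymentChannel(alice_funds=100, bob_funds=50)
    channel.pay("alice", "bob", 30)   # many fast payments, no on-chain latency
    channel.pay("bob", "alice", 5)
    channel.settle()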

What is Fulfillment Center?

A fulfillment center is a third-party logistics (3PL) warehouse where incoming orders are received, processed and filled. When an e-commerce seller uses a 3PL strategy for fulfillment, it relieves the seller from the responsibilities and costs associated with warehouse management and allows the seller to focus on marketing and sales efforts.

The seller can either receive and review goods prior to shipping them to the fulfillment center or have them sent to the warehouse directly from the manufacturer. If mistakes are made in fulfillment, the fulfillment center absorbs the costs. If the seller ships product from the fulfillment center and the customer returns the product, the fulfillment center typically absorbs re-stocking fees as well.

Using a fulfillment center to distribute a product can also reduce the cost of shipping it. This is because popular package delivery companies like United Parcel Service (UPS) are able to negotiate pricing with fulfillment centers because they are a reliable source of high-volume business.

“Third-party logistics refers to the outsourcing of one or more aspects of procurement or fulfillment to an outside organization that specializes in that area.” – Dave Turbide

Related Terms You Should Know: Fulfillment by Amazon, e-commerce, logistics, 3PL, warehouse control system

Free Tool

Mailcow is an open-source suite for running a self-hosted mailserver. It is a collection of different applications—like SOGo, Postfix and Dovecot—with an intuitive web interface for managing accounts.

Unimus is a multi-vendor network device backup and configuration management system aimed at making automation, disaster recovery, change management and configuration auditing easy. Free for up to 5 device licenses.

FSumFrontend is a drag-and-drop tool that allows you to compute message digests, checksums and HMACs for files and text strings. Can handle multiple files at once.

TRex is an open-source tool that generates realistic L3-7 traffic for testing end-to-end network performance. Stateless functionality includes support for multiple streams, the ability to change any packet field and per-stream/group statistics, latency and jitter. Advanced Stateful functionality includes support for emulating L7 traffic with fully featured, scalable TCP/UDP support. Emulation functionality includes client-side L3 protocols such as ARP, IPv6, ND, MLD, IGMP, ICMP and DOT1X in order to simulate clients and servers at scale. Can scale up to 200Gb/sec with one server.

Joplin is an open-source notetaking/to-do app that can sync via plain text files for optimal flexibility. Notes can be organized in notebooks; are searchable; and can be copied, tagged and edited from the applications directly or from your text editor.

Bees With Machine Guns is a utility for creating micro EC2 instances to load test web applications. You simply enter a target URL and an army of “bees” will simulate traffic originating from several different sources to hit the target.

kube-state-metrics is an add-on agent that listens to your Kubernetes API server to generate metrics on the state of objects like deployments, nodes and pods. Exposes raw data so you can get it unmodified and perform your own heuristics.

ViewDNS offers a nice, online collection of DNS and OSINT tools. The tools are also offered as an API to give webmasters the ability to easily integrate them into their own sites. A free API “sandbox” account has a monthly limit of 250 queries.

Blog

DMAC Network Automation Blog is where network engineer Daniel Macuare shares his passion for solving problems with code and improving the state of network infrastructure. You’ll find original articles, automation ideas and how-tos.

Tutorial

Shell Scripting Tutorial covers some of the basics of shell scripting and helps explain the powerful potential of programming available in the Bourne shell.

This excellent blog post explains exactly how to use the GPOZaurr command. Highly recommend getting familiar with the GPOZaurr PowerShell module, which in minutes can produce an Excel doc of all your GPOs, let you know which ones have issues, reveal passwords stored in GPOs and much more.

Lawrence Systems Blog offers video tutorials on firewalls, storage solutions, MSP tools, security tools and open-source topics. There’s also discussion on some of the products and solutions they’ve worked with in addressing problems for their clients.

Robert McMillan’s YouTube Channel offers videos that teach how to solve various complex problems—with a focus on speed. The videos quickly cover the essentials, so you can get the answers you need without a lot of extraneous detail. McMillan is an IT consultant, MCT and college instructor with over 50 technical certifications.

NetworkChuck Video Channel features tutorials on pretty much any IT certification area you might be pursuing offered by a CBT Nuggets Trainer. Covers Cisco, CompTIA, AWS and Microsoft with a focus on teaching the concepts in a way that is actually fun.

Cheatsheet

Sed Cheatsheet is Eric Pement’s handy reference to help facilitate Sed scripting.

Awk Cheatsheet is a collection of one-line Awk scripts compiled into a time-saving resource by Eric Pement.

The Most Common OpenSSL Commands is a list of essential commands and their usage for those who want to leverage the incredible versatility of OpenSSL but aren’t all that comfortable dealing with certs.

Podcast

On-Call Nightmares Podcast features the intriguing tales of those brave souls who work on-call in technology. Host Jay Gordon interviews the “survivors” as they share some of their nightmare experiences in trying to understand and resolve the problems that got dropped in their laps.

Documentation Resource

Affinity symbol set is a collection of printable, manufacturer-independent 2D icons you can use in your computer network diagrams.