IT Managed Services Provider Resource Recommendation Update on January 30, 2021

Unchecky is a quick answer to installers that try to push crapware or system modifications by requiring you to uncheck boxes at installation. Should you miss unchecking a box, you end up having to remove programs or reconfigure later on. Unchecky automatically unchecks unrelated installs and warns you about potentially suspect offers.

QuickLook offers a quick preview of file contents when you press the spacebar.

Microsoft Virtual Training Days are 1-2 day virtual events for enhancing your skills. Take advantage of expert webinars on Microsoft Azure, Microsoft 365, Microsoft Dynamics 365, or Microsoft Power Platform and interact with Microsoft experts.

Shodan is a search engine for Internet-connected devices that lets you discover which devices on your network — IoT gear included — are exposed to the Internet. Find out what is connected, where it’s located and with whom it’s communicating.

Tech Support Cheat Sheet is the answer for those tired of being expected to know how to use every piece of software that has ever been written, regardless of whether it has anything to do with your job. This all-purpose how-to is the perfect addition to your arsenal of user training materials.

f.lux changes the color temperature of your display based on the time of day, which can be far easier on your eyes.

Terms to know

What is a Memory Leak?

A memory leak is the gradual loss of available computer memory when a program repeatedly fails to return memory that it has obtained for temporary use.

Memory leaks are often caused by programming errors in computer code. For example, a programmer might dynamically allocate memory space for a variable of some type, but then forget to free up that space before the program completes. Should this happen, the system will have less free memory to use each time the program runs. Repeated running of the program or function that causes such memory loss can eventually crash the system.

Memory leaks can be exploited by attackers to gain access to sensitive data or purposely create a denial-of-service (DoS) condition. A garbage collector (GC) can help minimize memory leaks by automatically detecting when data is no longer needed and freeing up storage accordingly.

What is Process Mining Software?

Process mining software is a type of programming that analyzes data in enterprise application event logs in order to learn how business processes are actually working. Process mining software provides the transparency administrators need to identify bottlenecks and other areas of inefficiency so they can be improved.

When the software is used to analyze the transaction or audit logs of a workflow management system, data visualization components in the software can show administrators what processes are running at any given time. Some process mining apps also allow users to drill down to view the individual documents associated with a process.

If a process model doesn’t already exist, the software will perform business process discovery to create one automatically, sometimes with the aid of artificial intelligence and machine learning. If there already is a model, the process mining software will compare it to the event log to identify discrepancies and their possible causes.

Process mining software is especially useful for optimizing workflows in process-oriented disciplines such as business process reengineering (BPR). The technology is often applied to the most common and complex business processes executed in most organizations, such as order to cash, accounts payable and supply chain management.
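At its core, business process discovery can be as simple as counting which activity directly follows which in the event log. A toy Python sketch with an invented order-handling log (the case IDs and activity names are illustrative):

```python
from collections import defaultdict

# Toy event log: (case_id, activity), already ordered by timestamp per case.
event_log = [
    ("order-1", "create"), ("order-1", "approve"), ("order-1", "ship"),
    ("order-2", "create"), ("order-2", "ship"),          # skips approval
    ("order-3", "create"), ("order-3", "approve"), ("order-3", "ship"),
]

def directly_follows(log):
    """Count how often activity B directly follows activity A in each case."""
    traces = defaultdict(list)
    for case_id, activity in log:
        traces[case_id].append(activity)
    counts = defaultdict(int)
    for trace in traces.values():
        for a, b in zip(trace, trace[1:]):
            counts[(a, b)] += 1
    return dict(counts)

print(directly_follows(event_log))
# {('create', 'approve'): 2, ('approve', 'ship'): 2, ('create', 'ship'): 1}
```

The rare `('create', 'ship')` edge is exactly the kind of discrepancy — an order shipped without approval — that process mining surfaces for investigation.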

What is a Transportation Management System (TMS)?

A transportation management system (TMS) is specialized software for planning, executing and optimizing the shipment of goods. Users perform three main tasks in a TMS: find and compare the rates (prices) and services of carriers available to ship a customer’s order, book the shipment and then track its movement to delivery.

The broader goals of using a TMS are to improve shipping efficiency, reduce costs, gain real-time supply chain visibility and ensure customer satisfaction.
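The first of those tasks — finding and comparing carrier rates — can be sketched in a few lines of Python. The carrier names and rates below are invented for illustration; a real TMS would pull live quotes via carrier APIs:

```python
# Hypothetical carrier data for illustration only.
carriers = [
    {"name": "FastFreight", "rate_per_kg": 2.10, "days": 2},
    {"name": "EconoShip",   "rate_per_kg": 1.40, "days": 5},
    {"name": "MidRoute",    "rate_per_kg": 1.75, "days": 3},
]

def quote(weight_kg, max_days=None):
    """Find the cheapest carrier that meets the delivery deadline."""
    options = [c for c in carriers if max_days is None or c["days"] <= max_days]
    best = min(options, key=lambda c: c["rate_per_kg"] * weight_kg)
    return best["name"], round(best["rate_per_kg"] * weight_kg, 2)

print(quote(100))              # ('EconoShip', 140.0)
print(quote(100, max_days=3))  # ('MidRoute', 175.0)
```

The trade-off is visible immediately: the cheapest carrier wins unless a delivery deadline rules it out.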

Shippers and carriers are the primary users of TMS software. Manufacturers, distributors, e-commerce organizations, wholesalers, retailers and third-party logistics providers (3PLs) are also major users of TMS software.

A TMS is one of the core technologies used in supply chain management (SCM), a discipline that encompasses supply chain execution (SCE) and supply chain planning (SCP). TMSes are available as standalone software or as modules within enterprise resource planning (ERP) and SCM suites.

While some TMSes focus on a single mode of transportation, most systems support multimodal and intermodal transportation. In multimodal, a single carrier uses at least two modes of transportation — truck, rail, air or sea — and is legally liable for meeting the terms of the contract, though it might hire subcarriers. Intermodal transportation refers to shipments that require more than one carrier and contract. Intermodal gives shippers more control over carriers, prices and modes of transportation but makes them more responsible for managing the process.

TMSes have gained traction over the past decade as enablers of global trade and logistics. Gartner, in its March 2020 Magic Quadrant report, reported that, starting in 2018, the global TMS market was growing at a five-year compound annual growth rate (CAGR) of 11.1%, would reach $1.94 billion by 2022 and account for nearly a third of the SCE market.

What is Software Engineering?

Software engineering is the process of analyzing user needs and designing, constructing, and testing end-user applications that will satisfy these needs through the use of software programming languages. It is the application of engineering principles to software development.

In contrast to simple programming, software engineering is used for larger and more complex software systems, which are used as critical systems for businesses and organizations.

What is Logistics?

Logistics is the process of planning and executing the efficient transportation and storage of goods from the point of origin to the point of consumption. The goal of logistics is to meet customer requirements in a timely, cost-effective manner.

Originally, logistics played the vital role of moving military personnel, equipment and goods. While logistics is as important as ever in the military, the term today is more commonly used in the context of moving commercial goods within the supply chain.

Many companies specialize in logistics, providing the service to manufacturers, retailers and other industries with a large need to transport goods. Some own the full gamut of infrastructure, from jet planes to trucks, warehouses and software, while others specialize in one or two parts. FedEx, UPS and DHL are well-known logistics providers.

Typically, large retailers or manufacturers own major parts of their logistics network. Most companies, however, outsource the function to third-party logistics providers (3PLs).

What is a Distributed System?

A distributed system is any network structure that consists of autonomous computers connected using a distribution middleware. Distributed systems facilitate the sharing of different resources and capabilities, providing users with a single, integrated, coherent network.

The opposite of a distributed system is a centralized system. If all of the components of a computing system reside in one machine, as was the case with early mainframes built on the von Neumann architecture, it is not a distributed system.

What is Data Integration?

Data integration is the process of combining data from multiple source systems to create unified sets of information for both operational and analytical uses. Integration is one of the core elements of the overall data management process; its primary objective is to produce consolidated data sets that are clean and consistent and meet the information needs of different end-users in an organization.

Integrated data is fed into transaction processing systems to drive business applications and into data warehouses and data lakes to support business intelligence (BI), enterprise reporting, and advanced analytics. Various data integration methods have been developed for different types of uses, including batch integration jobs run at scheduled intervals and real-time integration done continuously.

Importance of data integration

Most organizations have a collection of data sources, often including external ones. In many cases, business applications and operational workers need to access data from different sources to complete transactions and other tasks. For example, an online order entry system requires data from customers, product inventory, and logistics databases to process orders; call center agents must be able to see the same combination of data to resolve issues for customers.

Loan officers have to check account records, credit histories, property values, and other data before approving mortgages. Financial traders need to keep an eye on incoming streams of market data from internal systems and external sources. Pipeline operators and plant managers depend on data from various sensors to monitor equipment. In these and other applications, data integration automatically pulls together the necessary data for users so they don’t have to combine it manually.

It’s the same in BI and analytics systems: Data integration gives data analysts, corporate executives, and business managers a complete picture of key performance indicators (KPIs), customers, manufacturing and supply chain operations, regulatory compliance efforts, financial risks, and other aspects of business processes. As a result, they have better analytical information available for uses such as tracking business performance, managing operations, and planning advertising and marketing campaigns.
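A toy Python sketch of the idea — consolidating records about the same customers from two hypothetical source systems (a CRM and a billing system) into one unified set, keyed on a shared identifier:

```python
# Two source systems describe the same customers with different fields.
crm_records = [
    {"customer_id": 1, "name": "Acme Corp", "email": "ops@acme.example"},
    {"customer_id": 2, "name": "Globex", "email": "it@globex.example"},
]
billing_records = [
    {"customer_id": 1, "balance": 1250.00},
    {"customer_id": 2, "balance": 0.00},
]

def integrate(*sources):
    """Merge records from multiple sources, keyed on customer_id."""
    merged = {}
    for source in sources:
        for record in source:
            merged.setdefault(record["customer_id"], {}).update(record)
    return merged

unified = integrate(crm_records, billing_records)
print(unified[1])
# {'customer_id': 1, 'name': 'Acme Corp', 'email': 'ops@acme.example', 'balance': 1250.0}
```

Real integration tools add the hard parts this sketch omits — matching records that lack a shared key, reconciling conflicting values and cleansing the data — but the goal is the same consolidated view.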

What is a Network?

A network, in computing, is a group of two or more devices or nodes that can communicate. The devices or nodes in question can be connected by physical or wireless connections. The key is that there are at least two separate components, and they are connected.

The scale of a network can range from a single pair of devices or nodes sending data back and forth, to massive data centers and even the global Internet, the largest network in existence. What all of these networks have in common, from the smallest ones to the largest, is that they allow computers and/or users to share information and resources. Networks may be used for:

  • Communications such as email, instant messaging, chat rooms, etc.
  • Shared hardware such as printers and input devices.
  • Shared data and information through the use of shared storage devices.
  • Shared software, which is achieved by running applications on remote computers.
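The smallest possible network — two connected nodes exchanging data — can be demonstrated with Python’s standard socket module:

```python
import socket

# socket.socketpair() returns two already-connected endpoints:
# the smallest possible "network" of two nodes.
node_a, node_b = socket.socketpair()

node_a.sendall(b"hello from node A")
message = node_b.recv(1024)
print(message.decode())  # hello from node A

node_b.sendall(b"ack")
reply = node_a.recv(1024)
print(reply.decode())  # ack

node_a.close()
node_b.close()
```

Everything larger — an office LAN, a data center, the Internet — is this same pattern repeated at scale, with routing in between.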

What is Cloud Automation?

Cloud automation is a broad term that refers to the processes and tools an organization uses to reduce the manual efforts associated with provisioning and managing cloud computing workloads and services. IT teams can apply cloud automation to private, public, and hybrid cloud environments.

Traditionally, deploying and operating enterprise workloads was a time-consuming and manual process. It often involved repetitive tasks, such as sizing, provisioning, and configuring resources like virtual machines (VMs); establishing VM clusters and load balancing; creating storage logical unit numbers (LUNs); invoking virtual networks; deploying the workload; and then monitoring and managing availability and performance.

Although these repetitive, manual processes can be effective, they are inefficient and often fraught with errors. These errors can lead to troubleshooting, which delays the workload’s availability. They might also expose security vulnerabilities that can put the enterprise at risk. With cloud automation, an organization eliminates these repetitive and manual processes to deploy and manage workloads. To achieve cloud automation, an IT team needs to use orchestration and automation tools that run on top of its virtualized environment.
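One common automation approach is declarative: describe the desired state and let a reconciliation loop compute the provisioning actions, rather than running each step by hand. A much-simplified Python sketch (the VM names and specs are invented for illustration):

```python
# Declare what should exist; the automation figures out how to get there.
desired_state = {"web-1": {"cpus": 2}, "web-2": {"cpus": 2}, "db-1": {"cpus": 4}}
actual_state = {"web-1": {"cpus": 2}}  # only one VM exists so far

def reconcile(desired, actual):
    """Return the provisioning actions needed to reach the desired state."""
    actions = []
    for name, spec in desired.items():
        if name not in actual:
            actions.append(("create", name, spec))
        elif actual[name] != spec:
            actions.append(("resize", name, spec))
    for name in actual:
        if name not in desired:
            actions.append(("delete", name))
    return actions

print(reconcile(desired_state, actual_state))
# [('create', 'web-2', {'cpus': 2}), ('create', 'db-1', {'cpus': 4})]
```

Because the loop always compares desired against actual, rerunning it is safe: once the environment matches the declaration, it produces no actions at all.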

What is a Macro?

A macro is an automated input sequence that imitates keystrokes or mouse actions. A macro is typically used to replace a repetitive series of keyboard and mouse actions and is often used in spreadsheet and word processing applications like MS Excel and MS Word.

A common file extension for macro files is .MAC.

The concept of macros is also well-known among MMORPG gamers (Massively Multiplayer Online Role-Playing Games) and SEO (Search Engine Optimization) specialists. In the world of programming, macros are programming scripts used by developers to re-use code.

The term macro is short for “macro-instruction” — a single “long” instruction that stands in for a sequence of simpler ones.
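The idea can be sketched in Python by treating a macro as a recorded list of actions replayed on demand — here plain string functions stand in for the keystrokes a real macro recorder would capture:

```python
# A macro is just a recorded sequence of actions replayed on demand.
def make_macro(actions):
    def run(target):
        for action in actions:
            target = action(target)
        return target
    return run

# Record a repetitive formatting sequence once...
tidy_cell = make_macro([str.strip, str.title, lambda s: s.replace("  ", " ")])

# ...then replay it on every cell instead of fixing each one by hand.
cells = ["  alice smith ", "BOB  JONES", " carol white"]
print([tidy_cell(c) for c in cells])
# ['Alice Smith', 'Bob Jones', 'Carol White']
```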

What is a Data Dictionary?

A data dictionary is a file or a set of files that contains a database’s metadata. The data dictionary contains records about other objects in the database, such as data ownership, data relationships to other objects, and other data.

The data dictionary is a crucial component of any relational database. It provides additional information about relationships between different database tables, helps to organize data in a neat and easily searchable way, and prevents data redundancy issues.

Ironically, despite its importance, the data dictionary is invisible to most database users. Typically, only database administrators interact with it.

A data dictionary is also called a metadata repository.
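SQLite offers a concrete example: its data dictionary lives in the built-in sqlite_master table, and PRAGMA table_info exposes the metadata recorded for each column. A short Python demonstration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")

# The data dictionary lists the objects the database knows about...
objects = conn.execute("SELECT name, type FROM sqlite_master").fetchall()
print(objects)  # [('customers', 'table')]

# ...and the metadata recorded for each column.
for cid, name, ctype, notnull, default, pk in conn.execute("PRAGMA table_info(customers)"):
    print(name, ctype, "NOT NULL" if notnull else "", "PK" if pk else "")

conn.close()
```

Other relational databases expose the same kind of metadata repository under different names, such as the standard INFORMATION_SCHEMA views.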

What is Cloud Infrastructure?

Cloud infrastructure refers to the hardware and software components — such as servers, storage, a network, virtualization software, services, and management tools — that support the computing requirements of a cloud computing model.

Cloud infrastructure also includes an abstraction layer that virtualizes and logically presents resources and services to users through application programming interfaces (APIs) and API-enabled command-line or graphical interfaces.

In cloud computing, these virtualized resources are hosted by a service provider or IT department and are delivered to users over a network or the internet. These resources include virtual machines and components, such as servers, memory, network switches, firewalls, load balancers, and storage.

In a cloud computing architecture, cloud infrastructure refers to the back-end hardware elements found within most enterprise data centers, but on a much greater scale. These include multisocket, multicore servers, persistent storage, and local area network equipment, such as switches and routers.

Major public cloud providers, such as Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform, offer services based on shared, multi-tenant servers. This model requires massive compute capacity to handle unpredictable changes in user demand and to balance that demand optimally across fewer servers. As a result, cloud infrastructure typically consists of high-density systems with shared power.

Additionally, unlike most traditional data center infrastructures, cloud infrastructure typically uses locally attached storage, both solid-state drives (SSDs) and hard disk drives (HDDs), instead of shared disk arrays on a storage area network. The disks in each system are aggregated using a distributed file system designed for a particular storage scenario, such as object, big data, or block storage. Decoupling the storage control and management from the physical implementation via a distributed file system simplifies scaling. It also helps cloud providers match capacity to users’ workloads by incrementally adding compute nodes with the requisite number and type of local disks, rather than in large increments via a large storage chassis.

Cloud infrastructure is present in each of the three main cloud computing deployment models: private cloud, public cloud, and hybrid cloud. In a private cloud, an organization typically owns the cloud infrastructure components and houses them within its own data center. In a public cloud model, the cloud infrastructure components are owned by a third-party public cloud provider. A hybrid cloud consists of a mix of both models working together to form a single logical cloud for the user.

What is Business Process Outsourcing (BPO)?

Business process outsourcing (BPO) involves contracting a third-party provider to handle a business process that could otherwise be done in-house, especially processes considered “non-primary” business activities and functions.

Examples include the outsourcing of payroll, human resources (HR), accounting, and customer/call center relations, as well as various kinds of data gathering and front-line service work.

Some types of BPO are also known as information technology-enabled services (ITES).