How to Do a Complete Magento Security Audit in 5 Simple Steps

April 28, 2020

A security audit requires you to think like a hacker. 

A security audit involves evaluating your store for vulnerabilities and assessing how well your current security measures hold up against an attempted hack.

Magento is one of the most popular open-source e-commerce Content Management Systems in the market today, and it handles sensitive payment information on a daily basis.

All of that financial data has quite obviously caught the eye of hackers. As a result, Magento stores are continuously under threat, from automated bad bots to sophisticated, targeted attacks.

Uncovering all vulnerabilities & loopholes in your website is the first step in securing it.

This article explains how to perform a complete security audit of your Magento store, along with remedial solutions for some of the issues you may discover along the way.

1. Identify Audit Areas: Magento Security Audit

One of the first things a hacker does on your website is identify the type of CMS, server OS, and other basic details such as:

  • Magento version.
  • PHP version.
  • Magento Modules.
  • Other software technologies.

Knowing this provides a roadmap to the attacker. 

For example, if you still use an outdated version of Magento, the attacker can exploit the known vulnerabilities in those older releases.

Hence, the first step in carrying out a Magento security audit is to find these details yourself, before an attacker does. Hackers have some shrewd ways of uncovering them; I am listing a few of these here.

a) Know Magento Version the Automatic Way

"Blind Elephant" is a popular tool of Kali Linux used for identifying the type of CMS being used. To identify the CMS, open the terminal in your Kali and type BlindElephant.py followed by the URL of the website and CMS you wish to scan i.e.  

BlindElephant.py http://192.168.1.252/ Magento

For more help, type:

BlindElephant.py -h

After the CMS has been confirmed as Magento, use MageScan to further enumerate CMS-specific details.

MageScan is a tool that can discover not only the Magento version but also installed modules, catalog info, and more. To download and use this tool, visit its GitHub repo.
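
As a rough sketch, once you have downloaded the magescan.phar build from the repository's releases page, a full scan of your store can be run like this (replace the example domain with your own):

php magescan.phar scan:all www.example.com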


MageScan uncovering details of a Magento site

b) Know Magento Version the Manual Way

In Magento 2, checking the version in use has become even easier.

Simply append "/magento_version" to the store's URL and, lo and behold, it reveals the Magento version to you, and to everyone else, including automated tools.
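
You can confirm this from the command line as well; for instance, assuming curl is installed and example.com is replaced with your store's domain:

curl -s https://www.example.com/magento_version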

Remediation

  • If your Magento version is outdated, update it now!
  • If you still use Magento 1, migrate to Magento 2.
  • Use the latest Magento version i.e. Magento 2.3.x.
  • Do not use outdated, ill-reputed modules.
  • Use custom error messages.
  • To hide your Magento version, connect to your site via SSH and execute the following command:

    php bin/magento module:disable Magento_Version
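
    After disabling the module, flush the cache and re-check the endpoint. A minimal sketch, again assuming curl and your own domain; the version string should no longer be disclosed (what you get instead depends on your 404 configuration):

    php bin/magento cache:flush
    curl -I https://www.example.com/magento_version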

2. Discover Content: Magento Security Audit

Once the Magento version has been discovered, the next step is to discover content, i.e. Magento directories, the admin panel, etc.

Tools like Dirb can brute force and discover various common directories and files.

If directories do not have proper permissions, they can leak sensitive info to attackers. Moreover, if the admin path is left at the default (www.example.com/admin/), attackers can discover it and brute-force the login to your Magento store.

To brute-force the directories on your site, open up a terminal in Kali and type "dirb" followed by your site URL, i.e.

dirb http://webscantest.com/
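
By default, dirb runs with its bundled wordlist. On a standard Kali install you can also point it at a specific wordlist and ask it to try PHP extensions, for example (the wordlist path is the usual Kali location):

dirb http://webscantest.com/ /usr/share/dirb/wordlists/common.txt -X .php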


Remediations

  • Set proper file and folder permissions (see the example after this list).
  • Enable two-factor authentication.
  • Do not use weak or default passwords.
  • Configure CAPTCHA in Magento 2 by visiting Stores > Configuration > Customers > Customer Configuration > CAPTCHA.
  • To change the default admin path, connect to your site via SSH and run the following command:

    php bin/magento setup:config:set --backend-frontname="myAdmin"

    Replace myAdmin with any random name of your choice.
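
For the file and folder permissions mentioned in the first point, the sketch below shows a commonly recommended baseline for Magento 2, run from the Magento root over SSH; treat it as a starting point and adjust it to your hosting setup and web server user:

find . -type d -exec chmod 755 {} +
find . -type f -exec chmod 644 {} +
chmod u+x bin/magento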

3. Find Server Misconfigurations: Magento Security Audit

While the CMS itself may be secure, it is quite possible that there are vulnerabilities on your server. These also need to be checked during a security audit. Some common server misconfigurations you can check during a Magento security audit are:

a) Open Ports

An open port means that a certain TCP or UDP port is accepting packets, which usually implies a service is listening behind it.

While an open port is not a bad thing in itself, the service running behind it can be exploited by an attacker.

To check whether your server has open ports, Nmap is undoubtedly the best tool there is. It can:

  • Scan for open ports.
  • Fingerprint OS.
  • Run NSE scripts for a variety of security checks, such as DDoS vulnerability and Heartbleed tests.

To scan TCP ports on your server, open up the terminal in Kali and type:

nmap -sT xxx.xxx.xxx.xxx

Replace xxx.xxx.xxx.xxx with the IP address you wish to scan.

Nmap scanning the server for open ports
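
The -sT option performs a basic TCP connect scan of the most common ports. For a more thorough audit, you can also ask Nmap to probe all 65535 TCP ports and fingerprint the services behind them:

nmap -sV -p- xxx.xxx.xxx.xxx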

b) Weak Cryptographic Implementations

Cryptography plays a vital role in ensuring that communication between your Magento store and its clients stays secure, and SSL/TLS is its most visible application. If your site does not have an SSL certificate, get one now!

If you already use SSL, make sure it is not vulnerable to bugs like POODLE, Heartbleed, etc. To check the SSL implementation on your Magento store for free, visit this site.

Nmap scripts can also be used to check for SSL vulnerabilities like Heartbleed. Just open the terminal in Kali and type:

nmap -sV -p 443 --script=ssl-heartbleed xxx.xxx.xxx.xxx

Again, replace xxx.xxx.xxx.xxx with the IP of the server you want to scan. Similarly, you can use Nmap scripts to scan for other vulnerabilities like DROWN, POODLE, etc.
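
Besides individual vulnerability scripts, Nmap's ssl-enum-ciphers script gives a quick overview of the protocol versions and cipher suites your server still accepts, which helps you spot weak ciphers:

nmap -sV -p 443 --script=ssl-enum-ciphers xxx.xxx.xxx.xxx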

Remediations

  • Use a firewall to block unused open ports (see the sketch after this list).
  • Use SSL.
  • Avoid using weak ciphers.
  • Avoid using shared hosting. Use a dedicated VPS if possible.
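
For the firewall point above, a minimal sketch using UFW, assuming an Ubuntu/Debian server where only SSH and web traffic should be reachable, could look like this:

sudo ufw default deny incoming
sudo ufw allow 22/tcp
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw enable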

4. Eliminate Injection Vulnerabilities

Various types of injection vulnerabilities arise from poor coding standards, which is why they regularly feature in the OWASP Top 10. The chances of you finding an injection vulnerability in the Magento core are low (unless you are a security expert), but not zero: since Magento is open-source, security researchers have found various XSS, CSRF, and SQLi bugs in the past. What you can realistically do is vet the various Magento extensions you use for injection vulnerabilities.

a) SQL Injection

SQLi occurs when user input is not properly sanitized, reaches the database, and gets executed as part of a query. The results can be disastrous for your Magento store, as attackers can get hold of login credentials and inject spam, credit card skimmers, etc.

To audit your Magento store for SQLi, perhaps there is no better tool than Sqlmap. This tool can automatically find and exploit SQLi bugs. 

To use Sqlmap, open the terminal in Kali and type:

sqlmap -u "www.your-site.com/file?param1=&param2=" --batch

Here, replace your-site.com with the site you wish to audit; param1 and param2 are the parameters you wish to test for SQLi. This is just a basic invocation; for more options, type:

sqlmap -h

Sqlmap testing URL parameters for SQL injection
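
As a concrete but hypothetical example, you could point sqlmap at a Magento search URL and pass your own session cookie for authenticated pages; the parameter value and cookie here are placeholders you would replace:

sqlmap -u "https://www.your-site.com/catalogsearch/result/?q=test" --cookie="PHPSESSID=<your-session-id>" --level=2 --risk=1 --batch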

b) Cross-Site Scripting

Not surprisingly, XSS is also caused by a lack of proper user input sanitization. By exploiting an XSS flaw, an attacker can run malicious JavaScript in your visitors' browsers. Without going into too much detail, anything malicious that can be done with JavaScript, such as stealing sessions, skimming card details, or defacing pages, can be done by exploiting an XSS bug.

To audit your Magento extensions for XSS bugs, one of the best-suited tools is XSSer. It can even bypass certain web application firewalls while exploiting an XSS bug, and a GUI option is available for novice users.

To learn how to use it, open the terminal in Kali and type:

xsser -h

XSSer scanning for cross-site scripting vulnerabilities
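
For a non-interactive run, XSSer typically takes the target host with -u and the GET path to fuzz with -g, using the string XSS as the injection marker; the GUI can be launched with --gtk. Both commands below are sketches, so adjust the URLs to your store:

xsser -u "http://www.your-site.com" -g "/catalogsearch/result/?q=XSS"
xsser --gtk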

Remediations

  • Follow secure development practices, e.g. using prepared statements, implementing CSRF tokens, etc.
  • Sanitize and validate all user input, and escape output.

5. Identify Business Logic Flaws

Business logic defines the processing and flow of data on your Magento store. In simple words, the user logs in; selects an item; adds it to the cart; then goes to the checkout page and finally pays to complete the process. All these constitute your business logic.

A business logic flaw, therefore, means that due to a lack of proper safeguards, a malicious user can tamper with any of these steps. For instance, a user could edit the price of an item on your Magento store and buy it for a lower price, or even for free!

This is just one example; there are a number of things that can go wrong. What makes Magento business logic flaws more serious is that automated security scanners typically cannot detect them.

Also, when a business logic flaw is exploited by the attacker, the firewall or IDS (Intrusion Detection System) may have no idea what's going on.

Remediations

  • The chances of finding business logic flaws in the Magento core are very low. But if you use extensions, seriously consider reviewing them for business logic flaws, ideally through manual testing, since automated scanners will miss them.

Conclusion

To conclude, a Magento security audit can be conducted with minimal resources.

However, there are many things that can go wrong; covering them all is beyond the scope of this article, but this blueprint gives you a solid starting point.

So, the least you can do as a regular user is to follow Magento security best practices (including regular security audits). These measures will harden your Magento website against most cyber ills.

Tell us how you liked this blog post in the comments.
