Meltdown/Spectre patches for E2E Networks Cloud & Dedicated servers

January 11, 2018

What are Meltdown and Spectre?

Meltdown breaks the mechanism that keeps applications from accessing arbitrary system memory; consequently, applications can read system memory. Spectre tricks other applications into accessing arbitrary locations in their own memory. Both attacks use side channels to obtain the information from the accessed memory location. For a more technical discussion, we refer to the papers (Meltdown and Spectre).

Several microarchitectural (hardware) implementation issues affecting many modern microprocessors have surfaced recently. As explained in Red Hat's security advisory, fixing these requires "updates to the Linux kernel, virtualization-related components, and/or in combination with a microcode update." An unprivileged attacker can use these flaws to bypass conventional memory security restrictions in order to gain read access to privileged memory that would otherwise be inaccessible. There are three known CVEs related to this issue affecting Intel, AMD, and ARM architectures. All three rely on the fact that modern high-performance microprocessors implement speculative execution and use VIPT (Virtually Indexed, Physically Tagged) level 1 data caches that may become allocated with data in the kernel virtual address space during such speculation.


  • CVE-2017-5753 (variant #1/Spectre) is a bounds-check bypass exploited during branching. This issue is fixed with a kernel patch. Variant #1 protection is always enabled; it is not possible to disable the patches. Red Hat's performance testing for variant #1 did not show any measurable impact.
  • CVE-2017-5715 (variant #2/Spectre) is an indirect branch poisoning attack that can lead to data leakage; it allows a virtualized guest to read memory from the host system. This issue is corrected with a microcode update together with kernel and virtualization updates to both guest and host virtualization software; both the updated microcode and the kernel patches are required. Variant #2 behavior is controlled by the ibrs and ibpb tunables (noibrs/ibrs_enabled and noibpb/ibpb_enabled), which work in conjunction with the microcode.
  • CVE-2017-5754 (variant #3/Meltdown) is an exploit that uses speculative cache loading to allow a local attacker to read the contents of memory. This issue is corrected with kernel patches. Variant #3 behavior is controlled by the pti tunable (nopti/pti_enabled).
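On patched Red Hat/CentOS kernels, the state of these tunables can be read from debugfs. A minimal sketch (the debugfs paths below are those documented by Red Hat; they exist only on patched kernels with debugfs mounted, so the helper reports "unavailable" otherwise):

```shell
#!/bin/sh
# Report the state of the Meltdown/Spectre mitigation tunables, if present.
show_tunable() {
  f="/sys/kernel/debug/x86/$1"
  if [ -r "$f" ]; then
    echo "$1=$(cat "$f")"
  else
    echo "$1=unavailable"
  fi
}

for t in pti_enabled ibrs_enabled ibpb_enabled; do
  show_tunable "$t"
done
```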

Patching instructions for Customers using E2E Cloud or VIRTUAL MACHINES -

Current status: E2E Cloud Infrastructure utilizes Xen paravirtualization for the best possible performance. Virtual machine kernels running in 64-bit PV mode are not directly vulnerable to attack using Meltdown, because 64-bit PV guests already run in a KPTI-like mode.

[CentOS Users] The currently released patched kernel from Red Hat causes virtual machines to fail to boot on Xen PV. This has been separately confirmed by people in the AWS and Citrix communities. We are awaiting revised kernel packages from Red Hat which will be suitable for use by our cloud customers, and will send out an update when they become available. For now, please continue with the older stable non-patched kernel in your CentOS virtual machines.

[Ubuntu and Debian Users] Please follow the same instructions as provided for users of dedicated machines below.

Patching instructions for Customers using DEDICATED MACHINES -

The following sections give information pertaining to available updates for CentOS, Ubuntu, and Debian distributions. Update all affected packages, then update your kernel and reboot into it. You may ignore the qemu-kvm and libvirt packages unless you are using virtualization. For more information on optionally disabling the fixes while using the new kernels, see the Red Hat article in the Notes section at the end.
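After rebooting, you can sanity-check that the running kernel is at or above the fixed version for your distribution (fixed versions are listed in the package tables below). A minimal sketch using GNU sort's version ordering; the helper name is ours, not from any vendor tool:

```shell
#!/bin/sh
# Return success (0) if version $1 is at or above version $2,
# using GNU sort's version ordering (-V).
kernel_at_least() {
  [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Example: compare the running kernel against a fixed version string.
if kernel_at_least "$(uname -r)" "4.4.0-108"; then
  echo "running kernel is at or above 4.4.0-108"
else
  echo "running kernel predates 4.4.0-108 - reboot into the new kernel"
fi
```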

Fix on CentOS

[Note] If you are a CentOS user using cloud/virtual machines, do NOT proceed with the kernel upgrades. Please see the patching instructions for CentOS virtual machines in the previous section of this document.

   $ sudo yum update kernel microcode_ctl linux-firmware qemu-kvm libvirt

Edit /boot/grub/grub.conf on CentOS 6 such that default=0 is set, signifying that the latest kernel (mentioned at the top of the list of boot entries) should be booted. On CentOS 6, the first 8 uncommented lines of grub.conf should look like this -

   default=0
   timeout=5
   splashimage=(hd0,0)/grub/splash.xpm.gz
   hiddenmenu
   title CentOS (2.6.32-696.18.7.el6.x86_64)
           root (hd0,0)
           kernel /vmlinuz-2.6.32-696.18.7.el6.x86_64 ro root=/dev/mapper/storage-root rd_NO_LUKS LANG=en_US.UTF-8 rd_MD_UUID=85d9e5f1:57836183:aebaae46:2601caca SYSFONT=latarcyrheb-sun16 crashkernel=auto rd_LVM_LV=storage/root KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet
           initrd /initramfs-2.6.32-696.18.7.el6.x86_64.img

On CentOS 7, verify /boot/grub2/grub.cfg -

   $ grep -A1 "BEGIN /etc/grub.d/10_linux" /boot/grub2/grub.cfg
   ### BEGIN /etc/grub.d/10_linux ###
   menuentry 'CentOS Linux (3.10.0-693.11.6.el7.x86_64) 7 (Core)' --class centos --class gnu-linux --class gnu --class os --unrestricted $menuentry_id_option 'gnulinux-3.10.0-693.el7.x86_64-advanced-93c83fb8-fd60-445a-8f0b-3be17d41146b' {

Boot into the new kernel: For CentOS dedicated machines, use the "reboot" command.

Fixed packages for CentOS

Fix on Ubuntu

The current patch will only address CVE-2017-5754 (aka Meltdown or Variant 3) for x86_64. A fix for the "Spectre" variants will be available soon. Ubuntu 17.04 will not receive any fix.

   $ sudo apt-get update

[ for Ubuntu 16.04 ]

   $ sudo apt-get install linux-generic

[ for Ubuntu 14.04 ]

   $ sudo apt-get install linux-image-4.4.0-108-generic

Edit /boot/grub/menu.lst such that default=0 is set, signifying that the latest kernel (mentioned at the top of the list of boot entries) should be booted. The first uncommented lines of menu.lst should look like this -

   default=0
   timeout=10
   title vmlinuz-4.4.0-108-generic
           root (hd0,0)
           kernel /boot/vmlinuz-4.4.0-108-generic root=/dev/xvda console=hvc0 ro
           initrd /boot/initrd.img-4.4.0-108-generic

Boot into the new kernel: For Ubuntu cloud/virtual machines, use the reboot button on the cloud console, and for dedicated machines, use the "reboot" command.

Fixed packages for Ubuntu

Package            Version                  Series
linux              4.4.0-108.131            Xenial 16.04
linux              4.13.0-25.29             Artful 17.10
linux-aws          4.4.0-1047.56            Xenial 16.04
linux-aws          4.4.0-1009.9             Trusty 14.04
linux-azure        4.13.0-1005.7            Xenial 16.04
linux-euclid       4.4.0-9021.22            Xenial 16.04
linux-gcp          4.13.0-1006.9            Xenial 16.04
linux-hwe-edge     4.13.0-25.29~16.04.1     Xenial 16.04
linux-kvm          4.4.0-1015.20            Xenial 16.04
linux-lts-xenial   4.4.0-108.131~14.04.1    Trusty 14.04
linux-oem          4.13.0-1015.16           Xenial 16.04

Fix on Debian

CVE-2017-5754 (aka Meltdown or Variant 3) is fixed. "Spectre" mitigations are a work in progress.

   $ sudo apt-get update
   $ sudo apt-get install linux-image-amd64

This will install the updated kernel package linux-image-3.16.0-5-amd64 on Debian 8 and linux-image-4.9.0-5-amd64 on Debian 9.

Boot into the new kernel: For Debian cloud/virtual machines, use the reboot button on the cloud console, and for dedicated machines, use the "reboot" command. With the new kernel running, you should see version 3.16.51-3+deb8u1 on Debian 8 "Jessie" and 4.9.65-3+deb9u2 on Debian 9 "Stretch" -

   # uname -srv
   Linux 3.16.0-5-amd64 #1 SMP Debian 3.16.51-3+deb8u1 (2018-01-08)

   # uname -srv
   Linux 4.9.0-5-amd64 #1 SMP Debian 4.9.65-3+deb9u2 (2018-01-04)
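The uname check above can also be done mechanically. A small sketch of our own (not a vendor tool) that matches the fixed version strings quoted in this section:

```shell
#!/bin/sh
# Check whether a `uname -v` string carries one of the fixed
# Debian kernel builds quoted above (Debian 8 and Debian 9).
is_fixed_build() {
  echo "$1" | grep -Eq '3\.16\.51-3\+deb8u1|4\.9\.65-3\+deb9u2'
}

if is_fixed_build "$(uname -v)"; then
  echo "fixed Debian kernel is running"
else
  echo "fix not active - reboot into the new kernel"
fi
```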

Vulnerable and fixed packages for Debian

Source Package   Release              Version            Status
linux (PTS)      wheezy               3.2.78-1           vulnerable
                 wheezy (security)    3.2.96-3           fixed
                 jessie               3.16.51-2          vulnerable
                 jessie (security)    3.16.51-3+deb8u1   fixed
                 stretch              4.9.65-3           vulnerable
                 stretch (security)   4.9.65-3+deb9u2    fixed
                 buster               4.14.7-1           vulnerable
                 sid                  4.14.12-2          fixed

The information below is based on the following data on fixed versions.

Package   Type     Release      Fixed Version      Urgency   Origin
linux     source   (unstable)   4.14.12-1          medium
linux     source   jessie       3.16.51-3+deb8u1   medium    DSA-4082-1
linux     source   stretch      4.9.65-3+deb9u2    medium    DSA-4078-1
linux     source   wheezy       3.2.96-3           medium    DLA-1232-1

Fix on Windows

Windows Server-based machines (physical or virtual) should get the Windows security updates that were released on January 3, 2018, and are available from Windows Update. The following updates are available:

Operating system version                                  Update KB
Windows Server, version 1709 (Server Core Installation)   4056892
Windows Server 2016                                       4056890
Windows Server 2012 R2                                    4056898
Windows Server 2012                                       Not available
Windows Server 2008 R2                                    4056897
Windows Server 2008                                       Not available

Use these registry keys to enable the mitigations on the server, and make sure that the system is restarted for the changes to take effect.

To enable the fix:

   reg add "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v FeatureSettingsOverride /t REG_DWORD /d 0 /f
   reg add "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v FeatureSettingsOverrideMask /t REG_DWORD /d 3 /f
   reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization" /v MinVmVersionForCpuBasedMitigations /t REG_SZ /d "1.0" /f

If this is a Hyper-V host and the firmware updates have been applied, fully shut down all virtual machines first (to enable the firmware-related mitigation for VMs, the firmware update must be applied on the host before the VMs start). Restart the server for the changes to take effect.

To disable the fix:

   reg add "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v FeatureSettingsOverride /t REG_DWORD /d 3 /f
   reg add "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v FeatureSettingsOverrideMask /t REG_DWORD /d 3 /f

Restart the server for the changes to take effect. (There is no need to change MinVmVersionForCpuBasedMitigations.)

Note: For Hyper-V hosts, live migration between patched and unpatched hosts may fail.

Verifying that protections are enabled

To help customers verify that protections have been enabled, Microsoft has published a PowerShell script that customers can run on their systems. Install and run the script as follows.

PowerShell verification using the PowerShell Gallery (Windows Server 2016 or WMF 5.0/5.1):

   Install the PowerShell module

   PS> Install-Module SpeculationControl

   Run the PowerShell module to validate the protections are enabled

   PS> # Save the current execution policy so it can be reset

   PS> $SaveExecutionPolicy = Get-ExecutionPolicy

   PS> Set-ExecutionPolicy RemoteSigned -Scope Currentuser

   PS> Import-Module SpeculationControl

   PS> Get-SpeculationControlSettings

   PS> # Reset the execution policy to the original state

   PS> Set-ExecutionPolicy $SaveExecutionPolicy -Scope Currentuser

PowerShell verification using a download from TechNet (earlier OS versions / earlier WMF versions):

   Install the PowerShell module from the TechNet Script Center. Download it to a local folder and extract the contents, for example to C:\ADV180002.

   Run the PowerShell module to validate that the protections are enabled. Start PowerShell, then (using the example above) copy and run the following commands:

   PS> # Save the current execution policy so it can be reset

   PS> $SaveExecutionPolicy = Get-ExecutionPolicy

   PS> Set-ExecutionPolicy RemoteSigned -Scope Currentuser

   PS> CD C:\ADV180002\SpeculationControl

   PS> Import-Module .\SpeculationControl.psd1

   PS> Get-SpeculationControlSettings

   PS> # Reset the execution policy to the original state

   PS> Set-ExecutionPolicy $SaveExecutionPolicy -Scope Currentuser

The output of this PowerShell script will resemble the following. Enabled protections appear in the output as “True.”

PS C:\> Get-SpeculationControlSettings

Speculation control settings for CVE-2017-5715 [branch target injection]

Hardware support for branch target injection mitigation is present: True

Windows OS support for branch target injection mitigation is present: True

Windows OS support for branch target injection mitigation is enabled: True

Speculation control settings for CVE-2017-5754 [rogue data cache load]

Hardware requires kernel VA shadowing: True

Windows OS support for kernel VA shadow is present: True

Windows OS support for kernel VA shadow is enabled: True

Windows OS support for PCID optimization is enabled: True

Notes and References

Performance impact (Linux): Speculative execution is a performance optimization technique, so these updates (both kernel and microcode) may result in workload-specific performance degradation. Some customers who feel confident that their systems are well protected by other means (such as physical isolation) may therefore wish to disable some or all of these kernel patches. For those who enable the patches in the interest of security, the following Red Hat article provides a mechanism to conduct performance characterizations with and without the fixes enabled: Controlling the Performance Impact of Microcode and Security Patches for CVE-2017-5754, CVE-2017-5715, and CVE-2017-5753.
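As a hedged illustration of the runtime controls described in that Red Hat article, the tunables can be written as well as read (root required, patched kernel with debugfs mounted). The wrapper below is our own, not a vendor tool; it skips any tunable that is not writable rather than failing:

```shell
#!/bin/sh
# Toggle a mitigation tunable at runtime via debugfs.
# Requires root and a patched kernel; otherwise the write is skipped.
set_tunable() {
  f="/sys/kernel/debug/x86/$1"
  if [ -w "$f" ]; then
    echo "$2" > "$f"
    echo "$1 set to $2"
  else
    echo "skip: $1 not writable"
  fi
}

# Example: disable all three mitigations for a benchmark run
# (pass 1 instead of 0 to re-enable them afterwards).
for t in pti_enabled ibrs_enabled ibpb_enabled; do
  set_tunable "$t" 0
done
```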
