Navigating the Global AI, Cloud and Data Policy Landscape: Key Insights

December 13, 2023

Introduction 

Since the rapid emergence of AI technologies in 2023, the AI regulatory landscape has started to formalize, marked by a global shift towards more structured and comprehensive policies. This evolution reflects an increased recognition of the profound impact AI technologies are having on various facets of society, economy, and governance.

Countries and international organizations are actively working to establish regulatory frameworks that balance the need for innovation with ethical considerations, data privacy, and security. These frameworks are essential to ensure that AI development and deployment are conducted responsibly, align with data protection laws, and keep pace with a rapidly advancing technology landscape. 

As these policies take shape, they will define the boundaries and responsibilities of AI developers, businesses, users, and regulatory authorities, shaping the future of AI integration into our daily lives and industries. 

Below, we outline some of the key learnings from our closed-door interactions with top stakeholders in this domain. 

The EU AI Act

On 9 December 2023, the European Union reached a landmark agreement, after three days of marathon talks, on comprehensive legislation to regulate artificial intelligence. The EU AI Act is being hailed as a global first in setting the standard for this rapidly advancing technology. 

The AI Act was originally proposed by the EU's executive arm in 2021 and has since gained significant momentum following the widespread impact of AI technologies. The legislation represents a major step forward in regulating AI development and usage within the EU, aiming to ensure that AI is developed and used responsibly, ethically, and in a human-centric way. 

Key Features

Central to the EU's AI policy is the debate around the regulation of foundational AI models. Key member states such as France, Germany, and Italy pushed for a balanced approach that does not stifle innovation while still holding developers to a code of ethics. 

Key aspects of the EU AI Act include:

  • Scope and Objectives: The AI Act aims to ensure the safe and ethical use of AI technologies, focusing on safeguarding the rights of people and businesses. It establishes a unique legal framework for the development of AI that can be trusted.
  • Risk-Based Approach: AI systems are categorized based on their potential risk, with stricter requirements for high-risk systems such as those used in healthcare, law enforcement, and critical infrastructure. The bottom line is: the higher the risk, the stricter the rules. AI systems with minimal risk only need to adhere to basic transparency rules; for instance, they must disclose when content is AI-generated, allowing users to make more informed decisions about its further use.
  • Prohibited Practices: Certain AI applications will be banned entirely. These include manipulation of cognitive behavior (for example, manipulative toys), the indiscriminate collection of facial images, emotion detection in workplaces and schools, social scoring by governments, biometric categorization to infer sensitive attributes such as sexual orientation or religious beliefs, and certain predictive policing methods.
  • Foundational Models: The agreement sets specific rules for foundational models, the large, multi-functional AI systems on which many applications are built. These include transparency obligations before market placement, with a more stringent regime for 'high impact' foundational models. The Act also contains provisions governing general-purpose AI (GPAI) systems, including cases where GPAI technology is subsequently integrated into another high-risk system. 
  • Regulation on High-Risk Applications: The Act requires tech companies operating in the EU to disclose the data used to train their AI systems and to conduct thorough testing of products, especially those used in high-risk applications such as self-driving vehicles or healthcare.
  • Transparency: The EU AI Act mandates that before launching high-risk AI systems, deployers must assess their impact on fundamental rights of citizens. Public entities using such AI systems are required to register on the EU's high-risk AI database. Additionally, users of emotion recognition systems are obligated to inform individuals when they are being monitored by these systems.
  • Sandbox Model: The agreement specifies that AI regulatory sandboxes will be created, which are designed for controlled development, testing, and validation of AI innovations. This will permit AI system testing under real conditions with certain safeguards. To support smaller companies and reduce their administrative load, the agreement outlines specific actions and provides limited, well-defined exceptions.
  • Penalties for Non-Compliance: Penalties for AI Act breaches are set at the higher of a fixed sum or a set percentage of the company's global annual turnover from the previous year. The penalties are structured as follows: €35 million or 7% for prohibited AI uses, €15 million or 3% for breaching AI Act obligations, and €7.5 million or 1.5% for providing false information. The provisional agreement also includes scaled-down fines for SMEs and start-ups (a simple illustration of the tier structure follows this list). 
  • Implementation Timeline: The final legislation will be worked out in the coming days, with the expectation that it could come into force by 2025.
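
To make the tiered penalty structure concrete, the snippet below is a minimal sketch, in Python, of the "higher of a fixed sum or a percentage of global annual turnover" rule described above. The tier names, helper function, and company turnover figure are illustrative assumptions, not part of the Act's text.

```python
# Illustrative only: compute the maximum possible fine under the tiered
# structure described above (fixed sum vs. % of global annual turnover,
# whichever is higher). The company turnover figure is hypothetical.

PENALTY_TIERS = {
    "prohibited_ai_use": {"fixed_eur": 35_000_000, "turnover_pct": 0.07},
    "obligation_breach": {"fixed_eur": 15_000_000, "turnover_pct": 0.03},
    "false_information": {"fixed_eur": 7_500_000, "turnover_pct": 0.015},
}

def max_fine(violation: str, global_annual_turnover_eur: float) -> float:
    """Return the higher of the fixed sum and the turnover-based amount."""
    tier = PENALTY_TIERS[violation]
    return max(tier["fixed_eur"], tier["turnover_pct"] * global_annual_turnover_eur)

# Hypothetical company with a €2 billion global annual turnover (previous year)
turnover = 2_000_000_000
for violation in PENALTY_TIERS:
    print(f"{violation}: up to €{max_fine(violation, turnover):,.0f}")
```

For a company of this hypothetical size, the turnover-based amount dominates every tier; for smaller firms the fixed sums act as the floor, and the agreement foresees more proportionate caps for SMEs and start-ups.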

The EU's AI Act is seen as setting a global benchmark for AI regulation and is likely to influence other countries and regions. The United States, India, China, and others are also exploring similar rules to balance the benefits of AI with the need for oversight. At the same time, negotiators were careful to avoid overregulation that could hinder the growth of European AI companies like Mistral AI and Aleph Alpha and thereby cede the field to US companies.

The EU AI Act also specifies that its rules will not extend to areas outside the scope of EU law and will not interfere with member states' national security competences. It excludes AI systems used solely for military or defense purposes. Additionally, the Act does not apply to AI used exclusively for research and innovation, or by individuals for non-professional purposes.

NIST and Evolving AI & Cloud Framework in the US

The National Institute of Standards and Technology (NIST) in the US has developed a framework for guiding the development and deployment of trustworthy AI and cloud-based systems. This framework emphasizes a scientific approach to policy drafting, focusing on evidence-based decision-making and stakeholder engagement. 

Given its comprehensive methodology, the NIST framework has garnered significant attention and its principles have been incorporated into several key initiatives, including the US Executive Order on AI.

The NIST framework consists of five core components:

  • Trustworthiness Principles: These principles outline the fundamental characteristics of trustworthy AI and cloud-based systems, such as fairness, accountability, transparency, and explainability.
  • Risk Management Framework: This framework provides a structured approach to identifying, assessing, and mitigating risks associated with AI and cloud-based systems.
  • Technical Standards and Guidelines: These standards and guidelines provide technical specifications for implementing trustworthy AI and cloud-based systems.
  • Conformity Assessment: This process helps to ensure that AI and cloud-based systems comply with relevant standards and regulations.
  • Workforce Development: This component focuses on developing the skills and knowledge needed to design, develop, deploy, and operate trustworthy AI and cloud-based systems.

The framework is intended to be a living document, evolving as technology and societal needs change. Its scientific approach and focus on stakeholder engagement make it a valuable tool for policymakers and industry leaders seeking to ensure the responsible development and deployment of AI and cloud-based technologies.

In fact, the US Executive Order on AI, signed by President Biden in October 2023, incorporates several key elements of the NIST framework. A similar approach is likely to shape the future of AI policy in other countries around the world as well. 

US Executive Order on AI 

In 2023, the US administration, under President Joe Biden, issued a comprehensive executive order to establish a new framework for AI governance, marking a shift towards greater transparency and safety in AI development. 

This executive order is the most extensive set of AI rules and guidelines issued by the US government to date. Primarily, it mandates increased transparency from AI companies about their models, particularly about how they work, and establishes new standards for labeling AI-generated content. The overarching goal is to enhance AI safety and security, including a requirement for developers to share safety test results for new AI models with the US government, especially if those models could pose a risk to national security.

This order, titled ‘Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence’, has several key features:

Trustworthy AI

  • Transparency and Explainability: Developers will be encouraged to make their AI systems more transparent and understandable to users.
  • Fairness and Non-Discrimination: The order aims to mitigate potential bias and discrimination in AI systems.
  • Privacy and Security: The order emphasizes the importance of protecting user privacy and data security.

Mitigating AI Risks

  • Development of Safety Standards: The order calls for the development of voluntary standards and best practices for the safe development and deployment of AI systems.
  • Addressing Algorithmic Bias: It directs federal agencies to assess and address potential biases in their AI systems.
  • Promoting Public Awareness: It encourages public education and awareness about AI risks and benefits.

Promoting Innovation

  • Investment in Research and Development: The order calls for increased federal investment in AI research and development.
  • Fostering Talent: It aims to attract and develop a skilled AI workforce.
  • Facilitating Collaboration: It encourages collaboration between government, industry, and academia on AI initiatives.

Global Leadership

  • Promoting International Cooperation: The order encourages the US to collaborate with other countries on AI governance.
  • Protecting National Security and Economic Interests: It aims to ensure that AI is developed and used in a way that protects national security and economic interests.

Key Initiatives

  • Federal AI Research and Development Initiative: This initiative aims to accelerate AI research and development across the federal government.
  • National AI Advisory Committee: This committee will provide expert advice to the government on AI policy and strategy.
  • National AI Workforce Initiative: This initiative aims to address the growing demand for skilled AI workers.

Overall, the Executive Order represents a significant step forward in the United States' approach to AI governance. It sets ambitious goals for promoting responsible AI development and use, while also acknowledging the challenges and risks associated with this technology.

India’s Stance on AI & Cloud

India's approach to AI regulation is undergoing a significant transformation with the evolution of the Digital India Act (DIA), which is set to replace the Information Technology (IT) Act, 2000. 

The new act is still in its drafting stage, but it is already becoming clear that India is likely to adopt a guidelines-based approach rather than a strict implementation framework for AI development. This aligns with the need for flexibility in adapting to an evolving regulatory landscape.

As I write this article, India is hosting the inaugural ceremony of the Global Partnership on Artificial Intelligence (GPAI) Summit 2023 in Delhi, with participation from 29 countries. The Prime Minister of India, Narendra Modi, announced the launch of an Artificial Intelligence Mission to promote the use of AI in sectors such as agriculture, healthcare, and education, with a national AI portal playing a key role. He also raised concerns around deepfakes, cybersecurity, and data theft, emphasizing the need for transparency in the use of AI. India is a founding member of the GPAI. 

Current Status

India currently does not distinguish between cloud service models such as IaaS, PaaS, and SaaS. Singapore is expected to soon release a policy that treats each service model separately, and it will be important to track that development. 

The Telecom Regulatory Authority of India (TRAI) proposed consolidating control over cloud and telecom policies under a single regulatory body, suggesting the integration of cloud policies under the Department of Telecommunications (DoT). However, this proposal was rejected, and the response indicated that the regulation of cloud services would remain under the purview of the Ministry of Electronics and Information Technology (MeitY). 

The MeitY Secretary is closely tracking forthcoming policies, with a specific focus on those being introduced by other Asian countries. This is noteworthy, as the Digital India Act is anticipated to draw on a combination of these policies.

Evolving Approach: Glocalizing Data for AI

Looking at the way frameworks are evolving globally, we are moving towards an approach that strikes a balance between global integration and local compliance. This acknowledges the need for unrestricted data flow across borders, while keeping checks and balances in place. 

A key aspect of this is the selective enforcement and policing of data. Instead of imposing universally rigid regulations, there is a growing consensus for tailored enforcement measures based on specific contexts and needs. The goal is to strike a balance between securing data, while pushing for innovation. This will allow for adaptability in diverse regulatory frameworks while addressing concerns like privacy, security, and the ethical use of data.

Another critical consideration is the concept of geofencing. Geofencing involves creating virtual boundaries around geographical areas to regulate data based on location. 

This concept aligns with the strategy of selective enforcement, where data governance measures are customized according to specific geographic regions. This approach helps comply with domestic privacy laws within states or countries and adds an additional layer of granularity to data governance. 
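
As a concrete illustration, the sketch below shows one way selective enforcement and geofencing could be combined in practice: a per-region policy table consulted before any data operation is performed. The region codes, rule set, and function are hypothetical assumptions made for illustration; real deployments would derive the rules from the applicable laws and from reliable location signals.

```python
# Hypothetical policy table: which data operations are allowed per region.
# In practice these rules would be derived from local law and contracts.
REGION_POLICIES = {
    "EU": {"store_locally": True, "cross_border_transfer": False, "profiling": False},
    "IN": {"store_locally": True, "cross_border_transfer": True, "profiling": True},
    "US": {"store_locally": False, "cross_border_transfer": True, "profiling": True},
}

def is_allowed(region: str, operation: str) -> bool:
    """Check whether an operation is permitted for data tied to a region."""
    policy = REGION_POLICIES.get(region)
    if policy is None:
        # Unknown region: fail closed, i.e. deny by default.
        return False
    return policy.get(operation, False)

# Example: decide whether a record captured in the EU may be transferred abroad.
print(is_allowed("EU", "cross_border_transfer"))  # False
print(is_allowed("IN", "profiling"))              # True
```

The key design choice is to fail closed: any region without an explicit policy entry is denied by default, so the granularity that geofencing adds does not silently widen data access.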

The Future of AI Policy-Making

The global AI regulatory landscape is rapidly evolving, characterized by a shift towards comprehensive and structured policies. Governments and international organizations are taking proactive steps to ensure responsible AI development and deployment, aiming to balance innovation with ethical considerations, data privacy, and security.

Key initiatives to track are the NIST framework, the US Executive Order on AI, the EU AI Act, and India's Digital India Act, all of which demonstrate a commitment to establishing responsible AI governance frameworks. These frameworks often share common principles, including:

  • Trustworthiness: Ensuring AI systems are fair, transparent, explainable, and accountable.
  • Risk Management: Mitigating potential risks associated with AI technologies.
  • Human Decision-Making: Emphasizing the importance of human control and decision-making.
  • Data Governance: Protecting user privacy and security through data regulations.

We are witnessing a trend towards ‘glocalizing’ data for AI, where frameworks seek to balance global data flow with local compliance needs. This involves selective enforcement, geofencing, and adapting regulations to specific contexts. 

Overall, these developments point towards a global convergence towards responsible AI, shaping the future of AI integration into our lives and industries. It is important for all cloud players, developers, startups and enterprises to track the evolving regulatory landscape in order to build AI in an ethical and responsible way. 
