Monday 27 March 2023

Adobe Firefly

Adobe Firefly is a new family of creative generative AI models. Firefly's initial focus is generating images and text effects, bringing power, ease, speed, and precision directly into Creative Cloud, Document Cloud, Experience Cloud, and Adobe Express workflows. It is part of a series of new Adobe Sensei generative AI services across Adobe's clouds.

Adobe has a long history of AI innovation, delivering intelligent capabilities through Adobe Sensei in apps that millions of people rely on. Thanks to Neural Filters in Photoshop, Content-Aware Fill in After Effects, Attribution AI in Adobe Experience Platform, and Liquid Mode in Acrobat, Adobe customers can already create, edit, measure, optimize, and review content with speed, power, ease, and precision.

Let's explore the features of Adobe Firefly.

Firefly Features:

Generative AI for creators: 

The beta of this first model lets you use everyday language to generate exceptional new content, and it has the potential to deliver excellent results.

Unlimited creative choices: 

This new model features context-aware image generation, so you can add any new idea you are imagining directly to your composition.

Instant productive building blocks: 

Have you ever imagined generating brushes, custom vectors, and textures from a simple sketch? That is now possible, and you can edit the results with the tools you already know.

Astounding video edits: 

The model lets you change the atmosphere, mood, or weather of a video. Its text-based video editing is exceptional: describe the look you want, and it adjusts colours and settings to match.

Distinctive content creation for everyone: 

With this model you can make unique posters, banners, social posts, and more from a simple text prompt. You can also upload a mood board to generate original, customizable content.

Future-forward 3D: 

In the future, Adobe expects Firefly to enable impressive 3D work. For instance, you could turn simple 3D compositions into photorealistic images and quickly create new variations and styles of 3D objects.

Creators get priority: 

Adobe is committed to developing creative, generative AI responsibly, with creators at the center. Its goal is to give creators every creative and practical advantage. As Firefly evolves, Adobe will keep working with the creative community to build technology that supports and improves the creative process.

Enhance the creative process: 

The model is designed to help users expand on their natural creativity. Because Firefly is embedded inside Adobe products, it can provide generative AI tools suited to customers' workflows, use cases, and creative needs.

Practical benefits for creators: 

Once the model is out of beta, creators will be able to use content generated with it commercially. As the model evolves further, Adobe expects to provide several Firefly models tailored to different uses.

Set the standard for responsibility: 

Adobe set up the Content Authenticity Initiative (CAI) to create a global standard for trusted digital content attribution. Adobe is pushing for open industry standards using the CAI's open-source tools, which are free and actively developed through the nonprofit Coalition for Content Provenance and Authenticity (C2PA). Adobe is also working toward a universal "Do Not Train" Content Credentials tag that stays attached to content wherever it is used, published, or stored.

New superpowers for creators: 

This model gives creators new superpowers, so they can work at the speed of their imagination. If you create content, the model lets you use your own words to generate it exactly how you want: images, audio, vectors, videos, and 3D, along with creative ingredients such as brushes, colour gradients, and video transformations.

It lets users generate countless variations of content and make changes repeatedly. Adobe will integrate Firefly directly into its industry-leading tools and services, so you can leverage the power of generative AI within your own workflows.

Adobe recently launched a beta for the model, showing how skilled and experienced creators can produce fantastic text effects and high-quality images. According to Adobe, the technology's power cannot be realized without the imagination to fuel it. Applications that will benefit from Adobe Firefly integration include Adobe Express, Adobe Experience Manager, Adobe Photoshop, and Adobe Illustrator.

Helping creators work more efficiently: 

According to a recent Adobe study, 88% of brands said demand for content has at least doubled over the previous year, and two-thirds expect it to grow five-fold over the next two years. Adobe is leveraging generative AI to ease this burden with solutions for working faster, smarter, and with greater convenience, including the ability for customers to train Adobe Firefly on their own collateral and generate content in their personal style or brand language.

Compensate makers: 

As it has previously done with Behance and Adobe Stock, Adobe aims to build generative AI in a way that lets customers monetize their talents. A compensation model for Adobe Stock contributors is in development, and Adobe will share details once the model is out of beta.

Firefly ecosystem: 

The model is expected to become available through APIs on various platforms, letting customers integrate it into custom workflows and automation.
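
As an illustration of what such an integration might look like, here is a minimal sketch of assembling a request for a hypothetical text-to-image endpoint. The URL, field names, and parameters are invented for illustration only, since Adobe had not published the Firefly API surface at the time of writing:

```python
import json

# Hypothetical endpoint -- Adobe had not published the real Firefly API
# at the time of writing, so this URL is a placeholder.
FIREFLY_ENDPOINT = "https://firefly.example.com/v1/images/generate"

def build_image_request(prompt, width=1024, height=1024, n=1):
    """Assemble a JSON body for a text-to-image generation request.

    All field names here are illustrative assumptions, not Adobe's schema.
    """
    return {
        "prompt": prompt,
        "size": {"width": width, "height": height},
        "numVariations": n,
    }

body = build_image_request("a poster of fireflies at dusk", n=4)
print(json.dumps(body))
```

A custom workflow would POST such a body to the API with the customer's credentials and collect the generated variations from the response.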

Conclusion:

Adobe's new model empowers skilled customers to produce high-quality images and excellent text effects. The "Do Not Train" tag mentioned above is aimed at creators who do not want their content used in model training, while the company also plans to let users extend the model's training with their own creative collateral.

Frequently Asked Questions

Q. How do you get Adobe Firefly?

It is available as a standalone beta at firefly.adobe.com. The beta is intended to gather feedback, and customers can request access to try it.

Q. What is generative AI?

It is a kind of AI that turns ordinary words and other inputs into original output.

Q. Where does Firefly get its data from?

The model is trained on a dataset of Adobe Stock images, openly licensed work, and public-domain content whose copyright has expired.

Friday 17 March 2023

Next Generation of AI for Developers and Google Workspace

Google has invested in AI for many years, bringing its advantages to individuals, businesses, and communities. By making artificial intelligence broadly accessible, Google publishes state-of-the-art research, builds helpful products, and develops tools and resources for others.

We are now at a pivotal moment in the AI journey. Breakthroughs in artificial intelligence are changing how we interact with technology, and Google has been developing large language models so it can bring them safely to its products.

To let businesses and developers start building with Google's best AI models, the company is opening up new APIs and products via Google Cloud and a new prototyping environment called MakerSuite, designed to be safe, easy, and scalable. Google is also introducing new features in Google Workspace that help users harness the power of generative AI to create, collaborate, and connect.

PaLM API & MakerSuite:

The PaLM API is an excellent way to explore and prototype generative AI applications. Previous technology and platform shifts, such as cloud computing and mobile computing, inspired developers to start new businesses, imagine new products, and transform how they create. We are now in the midst of another shift, with artificial intelligence profoundly affecting every industry.

If you are a developer experimenting with AI, the PaLM API lets you build safely on top of Google's best language models. Google is making available an efficient model, in terms of size and capabilities.

MakerSuite is an intuitive tool built on the API that lets you prototype ideas quickly. Over time, it will add features for prompt engineering, synthetic data generation, and custom-model tuning, all supported by safety tools. Select developers can access the PaLM API and MakerSuite in Private Preview today, and a waitlist will tell other developers when they can get access.
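
Since the PaLM API was still in Private Preview at the time of writing, its exact schema was not public. As a rough sketch, a text-generation request to such a hosted large-language-model API typically bundles a prompt with sampling parameters; the field names below are illustrative assumptions, not the confirmed PaLM API schema:

```python
import json

def build_text_request(prompt, temperature=0.7, max_output_tokens=256):
    """Assemble a JSON request body for a hosted text-generation API.

    The field names (prompt, temperature, maxOutputTokens) are
    illustrative assumptions, not the confirmed PaLM API schema.
    """
    return {
        "prompt": {"text": prompt},
        "temperature": temperature,      # sampling randomness, 0..1
        "maxOutputTokens": max_output_tokens,
    }

body = build_text_request("Write a two-line poem about fireflies.")
print(json.dumps(body, indent=2))
```

MakerSuite essentially wraps this loop in a UI: you iterate on the prompt and parameters interactively, then export the working configuration to code.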

Bring Generative AI Capabilities to Google Cloud:

As a developer who wants to build your own apps and models and customize them with generative AI, you can access Google's models (such as PaLM) on Google Cloud. New generative AI capabilities are coming to the Google Cloud AI portfolio, giving developers enterprise-level safety, security, and privacy along with existing integrations with Cloud solutions.

Generative AI Support in Vertex AI:

Developers and businesses use Google Cloud's Vertex AI to build and deploy ML models and AI applications at scale. Google is offering foundation models initially for generating text and images, and over time for audio and video. As a Google Cloud customer, you can discover models, create and modify prompts, fine-tune them with your own data, and deploy apps using these new technologies.

Generative AI App Builder:

Governments and businesses increasingly want to build their own AI-powered chat interfaces and digital assistants. To make that possible, Google is introducing Generative AI App Builder, which connects conversational AI flows with out-of-the-box search experiences and foundation models, helping organizations build generative AI apps in minutes or hours.

New AI partnerships and programs:

Alongside the new Google Cloud AI products, Google is committing to remain the most open cloud provider and is expanding its AI ecosystem with unique programs for technology partners, startups, and AI-focused software providers. Vertex AI with generative AI support and Generative AI App Builder became accessible to trusted testers on 14 March 2023.

New generative AI features in Workspace:

AI-powered features in Google Workspace already benefit over three billion people, for example through Smart Compose in Gmail and auto-generated summaries in Google Docs. Google is now taking the next step, bringing a limited set of trusted testers a writing experience that is simpler than before.

Type a topic in Gmail or Google Docs and a draft is instantly created for you, saving time and effort, for example for a manager onboarding new employees. From there you can shorten the message or adjust the tone to be more professional, all with a few clicks. Google says these features will roll out to testers soon.

Scaling AI responsibly:

Generative AI is an impressive technology that is evolving rapidly and comes with complex challenges, which is why Google invites internal and external testers to pressure-test the new experiences. For users who rely on Google products to create and grow their businesses, Google treats its AI principles as commitments: its primary aim is to improve these models responsibly, in partnership with others.

Conclusion:

Generative AI opens up many opportunities: helping people express themselves creatively, helping developers build modern apps, and transforming how businesses and governments engage their customers. More features are expected in the months ahead.

Monday 13 February 2023

Bard AI

Artificial intelligence is among the most talked-about technologies on the market today. It is useful in every field, from helping doctors identify diseases to letting people access information in their own language, and it helps businesses unlock their potential. It can open new opportunities to improve billions of lives, which is why Google re-oriented itself around AI six years ago.

Since then, the company has been investing in artificial intelligence across the board, with Google AI and DeepMind driving its future. The scale of the largest AI computations doubles every six months, and advanced generative AI and large language models are capturing imaginations worldwide. Let's look at Bard AI: what it is, the advantages it offers, and more.

What is Google Bard AI?

Bard is a conversational AI service from Google, powered by its LaMDA family of large language models ("Bard" is a name, not an acronym). It is a deep learning-based generative model built to produce high-quality natural language text that is coherent, contextually relevant, and well suited to many natural language processing applications, including text generation, language translation, and chatbots.

The model generates text that is both coherent and contextually relevant. It achieves this through the attention mechanisms of its underlying transformer architecture, which weigh the surrounding context of each word during generation, and its training helps it keep noise and irrelevant information out of the generated text.

Thanks to its flexibility, it can be fine-tuned for specific applications and domains. Training the model on domain-specific text data yields output better suited to applications such as medical or legal text. It can also be combined with other machine-learning components, such as language models or dialogue systems, to build more advanced conversational AI systems.

It can also handle multiple languages: because the model can be trained on text data from many languages, it can generate text in those languages with high fluency and accuracy. That makes it useful for multilingual apps and for companies that want to grow globally by reaching international markets.

It is also efficient and scalable, making it suitable for deployment in large-scale production systems. It runs on different hardware such as GPUs and TPUs, and for higher performance and quicker response times it can be parallelized across several devices.

Together, these features give the model the potential to revolutionize how businesses interact with clients and users. Whether for text generation, language translation, or chatbots, developers can expect high scalability from Bard.

Introducing Bard:

Google translates deep research into products. The company has unveiled next-generation language and conversation capabilities powered by LaMDA, which stands for Language Model for Dialogue Applications.

This LaMDA-powered experimental conversational AI service is called Bard. Before making it broadly available to all users, the company has opened it up to a few trusted testers.

Bard combines the breadth of the world's knowledge with the power, intelligence, and creativity of large language models. The model can draw on information from the web to offer fresh, high-quality responses.

Google initially released Bard with a lightweight version of LaMDA that requires significantly less computing power, which lets it scale to more users and gather more feedback. Google combines that external feedback with internal testing to ensure Bard's responses meet a high bar for quality, safety, and groundedness, and it is excited about a testing phase that should improve Bard's quality and speed.

Why is Google working on BARD AI?

Google has been working on this model to enhance the user experience and deliver better results for users; it is part of the company's ongoing AI effort as a leading technology firm.

It also works to improve the accuracy and relevance of search results. Because the system understands context, it can generate coherent, contextually relevant text, which lets the company return more accurate and relevant results. And since the model handles many languages, it helps the company reach international markets with output of high fluency and accuracy.

The company is also working on the model to offer a better user experience. What makes the system unique is that it can generate human-like text, allowing Google to offer more natural language interactions.

The model can easily learn and adapt over time. Using advanced machine learning algorithms, the system improves its performance and can be fine-tuned to meet users' needs and preferences. As a result, Google can offer users a more personalized experience.

Is Google Bard AI a competitor to ChatGPT?

Every large tech company is developing artificial intelligence, so in that sense they are all competitors, each aiming to deliver the best possible experience to users. The competition is fierce because service quality matters, and an AI model must be advanced enough to handle many kinds of user behavior.

The Bottom Line:

Google is working on Bard AI in particular to improve its search capabilities and offer people more relevant results. By incorporating AI into its offerings, Google positions itself as a market leader in artificial intelligence and helps set the standard for the industry.

Saturday 21 January 2023

New HomePod by Apple

On 18 January, Apple announced the second-generation HomePod, a smart speaker that delivers next-level acoustics. With several innovative features and Siri intelligence, the speaker offers an outstanding listening experience through advanced computational audio, and it supports Spatial Audio tracks.

The HomePod lets users create smart home automations using Siri, so they can manage everyday tasks and control their smart home in several ways. It can notify users when a smoke or carbon monoxide alarm sounds in the home, and it can report the temperature and humidity in a room. The model can be ordered online or from the Apple Store starting Friday, 3 February.

New HomePod Refined Design:

The eye-catching design of the HomePod includes a backlit touch surface, with transparent mesh fabric illuminated from edge to edge. The speaker comes in two colors, white and midnight (a new color), made of 100% recycled mesh fabric, and it includes a woven power cable that matches the color of the model.

New HomePod Acoustic Powerhouse:

The HomePod delivers impressive audio quality, producing high frequencies and deep bass. It is equipped with a custom-engineered high-excursion woofer with a powerful motor, and a built-in bass-EQ microphone that helps deliver a powerful acoustic experience. The S7 chip, combined with software and system-sensing technology, enables even more advanced computational audio, boosting the acoustic system's potential for an incredible listening experience.

Room-sensing technology detects sound reflections from nearby surfaces to determine whether the speaker is freestanding or against a wall, and the speaker adapts its sound in real time. A beamforming array of five tweeters separates and beams ambient and direct audio.

The speaker provides access to more than a hundred million songs with Apple Music, supports Spatial Audio, and can be used as a stereo pair. It can also deliver a home theatre experience when used with Apple TV 4K. With Siri you can access music by voice, searching by artist, song, lyrics, decade, genre, mood, or activity.

Experience with several HomePod Speakers:

With two or more HomePod or HomePod mini speakers, several useful features become available. Just say "Hey Siri" to use multi-room audio with AirPlay, or touch and hold the top of a speaker to play the same music on several HomePod speakers. You can also play different music on different HomePod speakers, or use them as an intercom to broadcast messages to another room.

Two second-generation speakers can form a stereo pair in the same space, separating the left and right channels and playing each in perfect harmony. The result is a wider, more immersive soundstage than a traditional stereo pair, delivering a groundbreaking listening experience that helps the model stand out.

Integration with Apple Ecosystem:

Ultra-wideband technology makes it possible to hand off a podcast, phone call, song, or anything else playing on an iPhone directly to the speaker. Bring your phone near the speaker to control playback or to receive personalized song and podcast recommendations, which appear automatically. The speaker recognizes up to six voices, so each member of the household can hear their own playlists, ask for reminders, or set calendar events.

If you have an Apple TV 4K, the speaker pairs with it easily for a great home theatre experience. With eARC (Enhanced Audio Return Channel) on Apple TV 4K, you can use the speaker as the audio system for every device attached to the TV.

The Find My on HomePod feature makes it easy to locate your Apple devices, for instance by playing a sound on a misplaced iPhone. You can also ask Siri for the location of friends who share their location via the Find My app.

New HomePod - A Smart Home Essential:

It includes a built-in temperature and humidity sensor for measuring indoor environments, so you can, for example, have a fan switch on automatically once a room reaches a certain temperature. With Siri you can control individual devices or create scenes such as "Good Morning."

Matter Support:

Matter support lets smart home products work across ecosystems while maintaining the highest level of protection. The Matter standard is maintained by the Connectivity Standards Alliance together with other industry leaders, and Apple is a member. The speaker can control Matter-enabled accessories and acts as an essential home hub, giving you access to your home even when you are away.

Secure Customer Data:

Protecting customer privacy is a core value at Apple. Smart home communications are end-to-end encrypted, so Apple cannot read them, including camera recordings with HomeKit Secure Video. When you use Siri, audio requests are not stored by default, helping ensure your privacy is protected.

New HomePod Pricing and Availability:

The second-generation HomePod is available to order now for $299 in the United States at apple.com/store and through the Apple Store app, and also in Australia, Canada, China, France, Germany, Italy, Japan, Spain, the UK, and eleven other countries. It will be available beginning 3 February.

The speaker works with the following models:

  • iPhone SE (2nd generation) or later 
  • iPhone 8 or later running iOS 16.3 or later 
  • iPad Pro and iPad (5th generation) or later 
  • iPad Air (3rd generation) or later 
  • iPad mini (5th generation) or later running iPadOS 16.3 or later

Customers in the United States get 3% Daily Cash back when they use Apple Card to purchase directly from Apple.

Conclusion:

The speaker is also designed to reduce environmental impact. It meets Apple's high standards for energy efficiency and is free of mercury, BFRs, PVC, and beryllium. No plastic wrap was used in the packaging, and 96% of the packaging is fiber-based, bringing Apple closer to its goal of removing plastic from packaging entirely by 2025.

Monday 26 December 2022

Client-Side Encryption for Gmail

Google announced that client-side encryption is in beta for Workspace and education customers. Its purpose is to secure emails sent through the platform's web version. Google released the update at a time when people are deeply concerned about data security and online privacy, so if you want to protect your private data, it is a welcome addition.

Google Workspace Enterprise Plus, Education Plus, and Education Standard customers can sign up for the beta until 20 January 2023. Note that the update is not available for personal Google accounts.

How to Set Up Client-Side Encryption for Gmail (beta):

The client-side encryption beta lets users send and receive encrypted emails both within and outside their domain. Note that Gmail encrypts message bodies and attachments, including inline images, whereas email headers, including the subject, timestamps, and recipient lists, are not encrypted.
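
To keep straight which parts of a message CSE protects, here is a tiny sketch summarizing the behavior just described (purely illustrative; this is not a Gmail API):

```python
# Which parts of a Gmail message does client-side encryption cover?
# This just restates the documented behavior as data.
ENCRYPTED_PARTS = {"body", "attachments", "inline_images"}
CLEARTEXT_PARTS = {"subject", "timestamps", "recipients", "other_headers"}

def is_encrypted_by_cse(part):
    """Return True if Gmail CSE encrypts this message part."""
    return part in ENCRYPTED_PARTS

for part in sorted(ENCRYPTED_PARTS | CLEARTEXT_PARTS):
    status = "encrypted" if is_encrypted_by_cse(part) else "cleartext"
    print(f"{part}: {status}")
```

The practical consequence: anything you need kept confidential must go in the body or an attachment, never in the subject line.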

If you have Google Workspace Enterprise Plus, Education Plus, or Education Standard, you can apply for the Gmail CSE beta. Before applying, follow these steps to prepare your account.

Set up Gmail CSE beta:

1. Prepare your account:

Ensure that your company uses Google Workspace Enterprise Plus, Education Plus, or Education Standard.

Step 1) Set up your environment:

Create a new GCP project and enable the Gmail API in the Google Cloud console:

  • First, create a new GCP project and note down its Project ID. 
  • Google will then make the project eligible for non-public, pre-release Gmail API endpoints. 
  • Head to the Google API Console and enable the Gmail API for the new project. 
  • Next, go to the Service accounts page and create a service account. 
  • Finally, save the service account's private key file to your local system.

Grant your service account domain-wide access:

  • Sign in to the Google Workspace Admin console with your super administrator account. 
  • Go to Security, then Access and data control, then API controls, then Domain-wide delegation. 
  • Add a new API client using the service account's client ID (created during setup). 
  • Grant the client the following OAuth scopes: gmail.settings.basic, gmail.settings.sharing, gmail.readonly.
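
The short scope names above correspond to full OAuth scope URLs, and the Domain-wide delegation form expects them as a single comma-separated string. A small sketch of assembling that string:

```python
# Full OAuth scope URLs corresponding to the short names
# gmail.settings.basic, gmail.settings.sharing, and gmail.readonly.
GMAIL_CSE_SCOPES = [
    "https://www.googleapis.com/auth/gmail.settings.basic",
    "https://www.googleapis.com/auth/gmail.settings.sharing",
    "https://www.googleapis.com/auth/gmail.readonly",
]

def delegation_scope_string(scopes):
    """The Domain-wide delegation form takes scopes as one comma-separated string."""
    return ",".join(scopes)

print(delegation_scope_string(GMAIL_CSE_SCOPES))
```

Paste the printed string into the "OAuth scopes" field next to the service account's client ID.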

Create the test group of users for Gmail CSE:

  • Sign in to the Google Workspace Admin console, then go to Directory and Groups. 
  • Click Create group. 
  • Add users individually to the test group to let them use the Gmail CSE beta; be sure to add users, not groups. 
  • Note down the test group's email address.

Step 2) Prepare your certificates:

Create S/MIME certificates: make sure there is an S/MIME certificate for every user in the group who will test Gmail CSE; both senders and recipients need certificates. For S/MIME requirements, refer to the list of Gmail-trusted CA certificates. If you want to use a test certificate authority instead, upload its root CA certificate to the Google Workspace Admin console so that it is trusted.

Use your key service to wrap the S/MIME private keys, following the steps in your key service provider's documentation.
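
For experimentation, a throwaway self-signed S/MIME certificate can be generated with OpenSSL (version 1.1.1 or later, for the -addext flag). The address user@example.com is a made-up test user; a production deployment needs certificates issued by a CA trusted by Gmail or by your admin, as described above:

```shell
# Generate a 2048-bit RSA key and a self-signed S/MIME test certificate,
# valid for 30 days, for the hypothetical test user user@example.com.
openssl req -x509 -newkey rsa:2048 -keyout smime_key.pem -out smime_cert.pem \
  -days 30 -nodes \
  -subj "/CN=Test User/emailAddress=user@example.com" \
  -addext "extendedKeyUsage=emailProtection" \
  -addext "subjectAltName=email:user@example.com"

# Confirm the email address and S/MIME key usage made it into the certificate.
openssl x509 -in smime_cert.pem -noout -subject -ext extendedKeyUsage
```

The extendedKeyUsage=emailProtection extension is what marks the certificate as usable for S/MIME email rather than, say, TLS server authentication.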

Step 3) Configure your key service and IdP:

  • First, set up the external key service (the primary key service only, not a secondary one). 
  • Then link Workspace to the key service. 
  • Finally, connect Workspace to your identity provider (IdP).

2. Apply for the Gmail CSE beta:

When you are ready, submit the CSE Beta Test Application, making sure to include the required email address, Project ID, and test group domain.

Once the application has been received, you will get an email when your account is ready.

Now, you should try setting up the CSE beta for the users.

3. Set up Gmail CSE beta:

Once you are notified that your account is ready, follow these steps to set up the CSE beta.

1. Turn on Gmail CSE:

Sign in to the Google Admin console with your super administrator account.

Go to Security, then Client-side encryption.

Click Gmail.

In the left panel, choose the group you submitted in your Gmail CSE enrollment form.

Set User access to On. The change can take up to twenty-four hours to take effect, though it usually happens much faster.

If you later remove a user from the group or turn Gmail CSE off for the group, all previously client-side-encrypted content remains accessible.

2. Upload users' certificates and wrapped private keys to Google:

Use the Gmail API, authenticating with the service account's private key file, to upload each user's S/MIME certificate and wrapped private key. Every user needs a key pair, plus an identity created with that key pair.

It can take up to 24 hours for the certificates to become available in Gmail. After that, you can use Gmail CSE.
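
The beta's exact API schema was shared only with accepted testers, but conceptually each upload pairs a PEM certificate chain with metadata pointing at the wrapped private key held by your external key service. A sketch of assembling such a body (the field names here are assumptions for illustration, not the confirmed beta schema):

```python
import json

def build_keypair_upload(pem_chain, kacls_uri, wrapped_key):
    """Assemble an upload body pairing an S/MIME certificate chain with
    metadata locating the wrapped private key in the external key service.

    Field names are illustrative assumptions, not the confirmed
    Gmail CSE beta schema.
    """
    return {
        "pkcs7": pem_chain,                # PEM-encoded certificate chain
        "privateKeyMetadata": [{
            "kaclsKeyMetadata": {          # key access control list service
                "kaclsUri": kacls_uri,     # where the key service lives
                "kaclsData": wrapped_key,  # opaque wrapped-key blob
            }
        }],
    }

body = build_keypair_upload(
    "-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----",
    "https://kacls.example.com",           # hypothetical key service URL
    "opaque-wrapped-key-blob",
)
print(json.dumps(body)[:80])
```

Note that the private key itself never reaches Google in usable form: only the wrapped blob is uploaded, and it can be unwrapped solely by your key service.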

4. Send and receive Gmail CSE Emails:

Make sure both the sender and the recipients have CSE turned on and hold valid certificates. If any recipient lacks a valid certificate, the sender cannot send them the email.

Send an encrypted email:

  • Click Compose in Gmail. 
  • Click Message security in the right corner of the message window. 
  • Under Additional encryption, click Turn on. 
  • Add your recipients, subject, and message content. 
  • Click Send, then sign in to your identity provider when prompted.

Receive encrypted email:

  • When you receive a CSE-encrypted message, "Encrypted message" appears under the sender's name. 
  • Open the encrypted message in your inbox and, when prompted, sign in to your identity provider. 
  • The message is then decrypted automatically in your Gmail browser window.

Try out Gmail CSE features:

Here are a few features you should try with your account:

  • Send and receive encrypted messages within the organization 
  • Send emails to external recipients 
  • Share digital signatures with external recipients 
  • Include quoted emails in a thread 
  • Receive emails from other mail clients like Microsoft Outlook and Apple Mail. 
  • Attach a file 
  • Paste an image 
  • Forward messages 
  • Save encrypted drafts 
  • Undo send

Conclusion:

The Google Drive apps for iOS, Android, and desktop are also compatible with client-side encryption, and Google says the capability will be integrated into the Meet and Calendar mobile apps later. According to Google, client-side encryption secures data and addresses a broad range of data sovereignty and compliance needs.