
How to use the new ChatGPT Mentions feature for custom GPTs

Using ChatGPT Mentions feature for multiple custom GPTs

OpenAI has recently rolled out a new feature to ChatGPT that changes the way you can interact with the company’s artificial intelligence. Last month OpenAI launched its highly anticipated GPT Store, allowing users to share custom GPT AI models with other subscribers. The new update for ChatGPT, which focuses on the GPT-4 model, is called ChatGPT Mentions and uses the @ symbol to give quick access to previously used custom GPT AI models.

This update is poised to substantially change the way we interact with ChatGPT, aiming to enhance productivity by allowing users to seamlessly integrate various text-based custom GPT AI models into a single conversation. This development is not just a small step forward; it’s a leap that promises to transform the way we work with AI.

How to use GPT Mentions

Imagine being able to switch between different AI models as easily as tagging a friend in a social media post. With the latest ChatGPT update, this is now possible. By simply using the “@” sign, users can effortlessly transition between various GPTs without the hassle of managing multiple tabs or losing their place in a conversation. This feature is a game-changer for those who rely on AI to assist with various tasks, as the most relevant AI assistant is now just a keystroke away, all within the same chat interface.

The update also introduces the concept of tailored AI expertise on demand. Each GPT model is equipped with its own specialized skills and knowledge base, making it adept at handling specific tasks such as crunching data, translating languages, or crafting creative content. This means that users can now instantly call upon the right AI for the job at hand, streamlining their workflow and minimizing interruptions.
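
As a simple illustration, a single conversation might flow like this (the GPT names here are hypothetical examples, not specific GPTs from the store):

@Data Analyst summarize the quarterly sales figures I pasted above
@Translator translate that summary into Spanish
@SEO Writer turn the Spanish summary into a 160-character meta description

Each mention hands the next message to a different custom GPT, while the full conversation history stays in one place.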

Here are some other articles you may find of interest on the subject of ChatGPT and custom GPTs:

Combine multiple custom GPTs in a single conversation

In the past, users had to rely on a plugin system to work with multiple GPTs, which could be clunky and disjointed. The new update, however, offers a more integrated experience. Users can now incorporate an unlimited number of GPTs within a single chat, greatly improving the ability to tackle complex tasks through a unified AI interface. This seamless integration is a significant step towards a more cohesive and efficient AI-assisted workflow.

Content creators, in particular, will find this update to be of great interest. The ability to utilize different GPTs for various stages of the content creation process—from generating ideas to optimizing for search engines—can greatly improve both the quality and efficiency of their work. This collaborative approach with AI models can be a powerful tool for anyone looking to enhance their content production process.

While this feature represents a significant advancement, it is still in the process of being perfected. At present, users need to have recently interacted with a GPT to integrate it smoothly into their conversation. However, developers are continuously working to refine this feature, with the goal of making these custom GPTs even more accessible and effective. The commitment to ongoing development suggests that future updates will bring even more enhancements, further improving the user experience.

The latest ChatGPT update with GPT-4 integration is reshaping the way we utilize AI in our daily tasks. The introduction of multiple specialized GPTs within a single conversation is not only streamlining workflows but also enriching the process of creating content. Although there are still some kinks to be ironed out, the direction of progress is evident. This update from OpenAI is poised to take productivity and efficiency to the next level, marking a significant milestone in the evolution of conversational AI.

Filed Under: Guides, Top News


Midjourney 6 Consistent Styles feature and updates this week

Latest Midjourney 6 updates and Consistent Style Reference

At its latest weekly office hours briefing, Midjourney announced a series of updates that are set to enhance the experience for its users. The highlight of these updates is the introduction of the version 6 Beta, which promises to deliver improved image quality and a more user-friendly interface. This new version is designed to make the creative process smoother and more intuitive, allowing users to focus on bringing their ideas to life.

One of the most exciting developments is the expansion of the alpha creation website, which now includes a new explore page. This feature provides users with a wider array of top images to draw inspiration from, going beyond the personalized feed that users are accustomed to. It’s a move that’s expected to spark creativity and provide fresh perspectives for users’ projects.

Midjourney Consistent Styles

Late last night Midjourney also announced the release of the first algorithms for ‘Consistent Styles’, a feature it is calling “Style References”. It is designed to work similarly to image prompts: you give a URL to one or more images that ‘describe’ the consistent style you want to work with. This tool is aimed at helping users maintain visual consistency when generating a series of images or working on thematic collections. It’s a valuable asset for those who want to ensure that their work aligns with their artistic vision and remains cohesive throughout.

How to use Midjourney Style References:

  • Type --sref after your prompt and put one (or more) URLs to images, like this: --sref urlA urlB urlC
  • The image model will look at the image URLs as ‘style references’ and try to make something that ‘matches’ their aesthetics
  • Set relative weights of styles like this: --sref urlA::2 urlB::3 urlC::5
  • Set the total strength of the stylization via --sw 100 (100 is default, 0 is off, 1000 is maximum)
  • Regular image prompts must go before --sref, like this: /imagine cat ninja ImagePrompt1 ImagePrompt2 --sref stylePrompt1 stylePrompt2
  • This works for both V6 and Niji V6 (it does not work with V5 etc)
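
Putting those options together, a complete prompt might look like the following. This is only an illustrative sketch using the documented parameters; the URLs are placeholders for images you would supply yourself:

/imagine cat ninja on a rooftop --sref https://example.com/styleA.png::2 https://example.com/styleB.png::3 --sw 200

Here the second reference image is weighted slightly more heavily than the first, and the overall stylization strength is doubled from the default of 100.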

The Midjourney team explains:

  • We’ll likely update this in the next few weeks (and it may change things so be careful while it’s all in alpha).
  • If your prompt tends towards photorealism and you want a conflicting style like illustration, you may still need to add some text to your prompt saying so
  • Style References have no direct effect on image prompts, only on jobs that contain at least one text prompt
  • Our plan is to add a “Consistent Character” feature at a later date that works the same way with a --cref argument.

Midjourney updates this week

Here are some other articles you may find of interest on the subject of Midjourney styles and AI art creation :

Midjourney has also made improvements to the “describe” function, which is designed to better understand user prompts. This enhancement means that users will need to spend less time tweaking their prompts and can instead dedicate more time to the creative aspects of their projects.

For those who enjoy creating characters, the updates to the Niji model are particularly noteworthy. New features such as pan, zoom, and in-painting, as well as improvements in facial consistency, will make it easier for users to maintain the identity of their characters across different images. This is a significant step forward for anyone involved in narrative and character design.

The platform’s website itself is being redesigned to provide a more streamlined and accessible experience. This redesign is a response to community feedback and focuses on incorporating features that have been in high demand. The aim is to make the platform more intuitive and user-centric.

During the event, there was also a thoughtful discussion about artistic nudity, highlighting Midjourney’s commitment to balancing creative expression with community standards. This indicates that the platform is considering the diverse needs and values of its user base.

Looking ahead, Midjourney has announced that future updates will include improved text rendering, the ability to create images with transparent backgrounds, and more precise color control through hex code adjustments. These upcoming features are designed to give users even more control over their creative output, allowing for a level of precision that was previously unattainable.

The platform is also considering the addition of new features such as upscaling, seamless tiling, and the ability to export prompt history. These potential updates suggest that Midjourney is looking to broaden its appeal and cater to an even wider audience by expanding its capabilities.

The recent updates and the anticipation of future enhancements are a testament to Midjourney’s commitment to evolving in response to the needs of its community. The platform is clearly focused on improving image quality, making the user experience more accessible, and providing tools that enrich the creative process. For users, these changes are likely to make their creative journey more immersive and enjoyable, offering new possibilities for artistic expression and innovation.

Filed Under: Technology News, Top News


Google releases new January Pixel feature drop


Google has announced the release of a new January feature drop for its Pixel smartphones. This update brings some new features, including the Circle to Search feature that we first saw at the Samsung Galaxy S24 launch.

Other new features included in this release are Magic Compose for Google Messages, the ability to respond with Photomoji, and more. Google is also releasing a new Mint Green color option for the Pixel 8 and Pixel 8 Pro smartphones.

Searching on your Pixel 8 and 8 Pro is getting easier with Circle to Search, rolling out January 31. Google AI unlocks a new way to search anything on your phone, without needing to switch apps. Just long press the Pixel home button or navigation bar and circle, highlight, scribble or tap what you see to get more information from Search, right where you are. Use it to find what clothes a creator wore in a video or get extra help with a tough crossword clue. And with multisearch’s latest AI-powered upgrades, you can ask a more complex question about an image you’re searching so it’s easier to learn more about the world around you, like whether a certain plant needs fertilizer.

Use Google’s generative AI technology to rewrite a drafted message in different styles with Magic Compose on Pixel 6 and newer. You can use this feature to make your messages more concise, professional or dramatic like Shakespeare himself. And this all happens on-device on Pixel 8 Pro, thanks to Gemini Nano, Google’s most efficient model built for on-device tasks.

You can find out more details about the latest Google Pixel feature drop over at Google’s website at the link below. We can expect these features to be released for other Pixel devices in the future.

Source Google

Filed Under: Android News, Mobile Phone News


Runway AI text-to-video Ambient Motion Control feature demo

New Runway AI text-to-video Ambient Motion Control feature demonstrated

Runway, the text-to-video AI service, is transforming the way we create videos and animations with a powerful new feature that allows users to add motion to static images with incredible precision. This ambient control setting is a breakthrough for those who use the platform, offering a sophisticated method to animate AI-generated content. Whether you’re looking to add a gentle sway to trees in a landscape or subtle expressions to a character’s face, this tool makes it possible.

The Ambient Motion Control feature is a leap forward for Runway text-to-video users, providing a refined way to animate AI-generated content. Imagine wanting to capture the subtle rustle of leaves or the nuanced expressions in a portrait that make it appear almost alive. With the ambient slider, you can adjust the intensity of the motion, customizing the animation to fit your vision. This user-friendly feature allows for the quick creation of different clips for comparison.


Features of Runway

  • Pre-trained AI models: These models cover a variety of tasks, like generating photorealistic images or videos from text prompts, manipulating existing media like changing the style of a video or adding special effects, and analyzing content to identify objects or people.
  • No coding required: RunwayML’s interface is designed to be user-friendly and intuitive, even for those with no coding experience. You can access and use the various AI models with simple clicks and drags.
  • Customizable tools: The platform also allows users to train their own AI models and import models from other sources, giving them even more control over their creative process.
  • Community-driven: RunwayML has a thriving community of creators who share their work and collaborate on projects. This fosters a sense of inspiration and learning for everyone involved.

When you adjust the ambient settings, the impact on your videos is clear. A slight tweak can add a gentle movement to foliage, while a stronger setting can create the illusion of a windy day. For portraits, the technology can mimic realistic movements, such as hair fluttering in the breeze or the natural blink of an eye, giving your animations a sense of authenticity and life.

But the ambient control is just one part of what the Runway text-to-video AI service offers. Other tools include camera controls and text prompts, which help direct the viewer’s attention and add narrative to your animation. To further enhance your work, you can use post-processing techniques with tools like Adobe After Effects to achieve a professional finish.

RunwayML text-to-video

  • AI Magic Tools: These are pre-trained models that let you perform various tasks with just a few clicks, such as generating different artistic styles for an image, changing the lighting or weather in a video, or adding facial expressions to a still image.
  • AI Training: This feature allows you to train your own custom AI models using RunwayML’s platform. This is helpful if you need a model that performs a specific task that is not already available in the pre-trained model library.
  • Video Editor: RunwayML also includes a full-featured video editor that you can use to edit your videos and add special effects.
  • Community: The RunwayML community is a great place to find inspiration, learn new things, and share your work with others.

By mastering the ambient controls and incorporating camera movements, you can produce animations that not only draw the viewer in but also fully immerse them in the story you want to tell. These creations go beyond simple videos; they are experiences that draw audiences into the worlds you create.

RunwayML’s ambient control setting within the motion brush feature opens up new possibilities for creativity. By experimenting with different images, artistic styles, and additional tools like camera controls and Adobe After Effects, you can create animations that are visually and emotionally compelling. As you become more skilled with these features, your work will stand out in the world of AI-generated content, captivating viewers with every frame. RunwayML is a powerful and versatile AI text-to-video platform that can be used to create all sorts of amazing things; give it a try for yourself for free.

Image Credit: RunwayML

Filed Under: Technology News, Top News


Samsung Health gets new Medications Tracking Feature


Samsung has announced that it is adding a new Medications tracking feature to Samsung Health for its Galaxy range of devices. This new feature will allow you to track your medication and manage your health.

Upon entering the name of a selected medication into Samsung Health, the Medications feature will provide users with detailed information that includes general descriptions as well as possible side effects. Adverse reactions that could occur from drug-to-drug interactions, or if a medication is taken alongside certain foods and substances such as caffeine and alcohol, are also provided. For example, if a user is taking the prescription drug Simvastatin, Samsung Health will warn them that the drug has been linked to serious side effects when combined with grapefruit juice. Users can even log the shape and color of their medications, allowing them to easily differentiate between the pills they are taking. Dosage, time of consumption and other details can also be added to avoid any potential confusion.

Users can set up alerts that remind them both when to take their medications and when they should consider refilling them. These alerts are fine-tuned to the individual user so the Medications feature is able to prioritize medications depending on their importance, with Samsung Health sending reminders ranging from “gentle” to “strong” depending on how important or urgent a given prescription is. For crucial medications, users can set a “strong” reminder that will display a full screen alert on their smartphone accompanied by a long tone. For supplements like vitamins, a simple pop-up reminder will appear that will not disturb the user. Galaxy Watch users will also receive reminders right on their wrist so they can stay on top of their medication schedules, even when away from their phones.

This could be a really useful feature and could help older users remember to take their medication when it is required. You can find out more details about the new Medications feature for Samsung Health at the link below.

Source Samsung

Filed Under: Android News, Technology News, Top News


How to Activate AI Feature on Samsung Galaxy Smartphones


Are you a Samsung Galaxy smartphone user yearning to unlock the full artistic potential of your device’s photos? You will be pleased to know that nestled within your phone is a powerful, yet often overlooked, AI-based feature that can transform your ordinary photos into extraordinary artistic masterpieces. The video below from Sakitech delves into the specifics of this hidden gem, available in Samsung’s OneUI 6.0, 5.0, and earlier versions.

Where to Find This Feature:

Primarily, this feature resides in the Gallery application of your Samsung Galaxy smartphone. It’s not just another tool; it’s your gateway to a world of artistic expression.

The Art of Photo Transformation:

Imagine taking a regular photo and, with a few taps, turning it into a work of art. This feature allows you to do just that. It’s like having a digital art studio in your pocket. You can take any photo, even a simple selfie, and give it an artistic makeover.

Accessing the Magic:

If you’re using OneUI 6.0, accessing this feature is a breeze. Simply tap on ‘Edit’ in your Gallery app, and then find the four dots at the bottom of the screen. This will open up a plethora of tools, with the ‘Style’ tool being your target, located conveniently on the right side.

Choose Your Artistic Style:

The Style tool is where the real fun begins. It offers a variety of effects to choose from, including color pencil, comic, watercolor, blue ink, pastel, marker, line art, oil paint, cubism, and pen and wash. Each style has its unique charm and can be adjusted for intensity, ensuring your photo perfectly captures your artistic vision.

Versatility Across Versions:

If you’re using an older version of OneUI, fear not. The process to access these effects may vary slightly, but the feature is still available, ensuring everyone can join in on the creative fun.

Applying Styles to Diverse Images:

The versatility of this feature is truly showcased when applied to different types of images. From landscapes to portraits, watch as your photos are transformed into professional-looking artworks.

A Special Touch for Human Subjects:

For those photos featuring people, there’s an extra trick up the feature’s sleeve. You can toggle the effect to apply selectively to either the person, the background, or both. This selective application can create stunning portraits where the subject and background complement each other beautifully.

Continuous Improvement:

What’s exciting is that this feature isn’t static. It has received updates, adding more styles to its already impressive repertoire. This commitment to enhancement means that the creative possibilities are continually expanding.

Summary

Now that you’re equipped with the knowledge of this hidden feature, it’s time to explore and experiment. Whether you’re a budding artist, a creative enthusiast, or just someone who appreciates a touch of flair in their photos, this feature opens up a world of possibilities. Transform your memories into art and let your creativity soar with your Samsung Galaxy smartphone.

Source & Image Credit: Sakitech

Filed Under: Apple, Guides, Mobile Phone News


How to use the new Midjourney Style Tuner feature

How to use the new Midjourney Style Tuner feature to add personality to your AI images

The development team at Midjourney has recently introduced a new feature known as the Style Tuner. This innovative tool allows users to create their own unique styles, offering a higher degree of customization and control over the model’s personality when creating AI artwork. Below is a guide kindly created by Future Tech Pilot providing an in-depth understanding of how to effectively use this new Midjourney Style Tuner feature and what results you can expect, as well as how to tune it by making selections to create your own unique style.

Midjourney’s AI art generator has always been at the forefront of innovation, and the recent introduction of the new Midjourney Style Tuner feature is a testament to this. This groundbreaking tool provides users with the ability to create their own unique styles, thereby offering an unprecedented level of customization and control over the model’s personality. This guide is designed to provide a comprehensive understanding of how to effectively harness the power of this feature.

What is the Style Tuner?

  • The Style Tuner lets you make your own Midjourney style
  • Even though we call it a “style”, it’s a bit more than that: it’s really controlling the “personality” of the model, and it influences everything from colors to character details
  • You can optimize specifically around a single prompt or setting (including non-square aspect ratios, image + text prompts and raw mode)

The Style Tuner is not just a feature; it’s a tool that unlocks a world of customization possibilities. By simply typing “/tune” in the Midjourney prompt box, users can initiate the process of creating a new style. The system then requests a simple prompt, which serves as the basis for the style. The Style Tuner then generates a variety of style options, enabling users to select and share their unique styles without any additional charges.

How to use the Midjourney Style Tuner feature

Other articles you may find of interest on the subject of AI art generation:

How do I use it?

  • Type /tune and then a prompt
  • Select how many base styles you want to generate (cost is proportional here)
  • After clicking submit it’ll show you estimated GPU time. Click to confirm.
  • A custom “Style Tuner” webpage will be created for you. A URL will be sent [via DM] when done.
  • Go to the Style Tuner page and select the styles you strongly like to make your own
  • Most guides/mods recommend selecting 5-10 styles (but any number works)
  • Use codes like this: /imagine cat --style CODE
  • Remember you can make TONS of styles with a single Style Tuner! Always try variations and play.
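
To make that flow concrete, here is a hypothetical session (the style code shown is a made-up placeholder; yours will be generated for you by the Style Tuner):

/tune vibrant fantasy landscape
(select your favorite styles on the Style Tuner page it returns, then copy the code it generates)
/imagine vibrant fantasy landscape, a castle at sunrise --style k9zq2mxw

The same --style k9zq2mxw can then be appended to any other prompt to keep your images visually consistent.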

Create your own Midjourney style using the new tuning feature

  1. To begin using the Midjourney Style Tuner, users need to type /tune in the Midjourney prompt box. The system will then request a prompt. It is recommended to keep this prompt simple to ensure the style remains applicable across a wide range of subjects.
  2. Creating a New Style – Upon entering the prompt, the Style Tuner initiates 32 jobs for 16 directions, generating 32 different style options. This process consumes GPU minutes. However, once the styles are created, selecting and sharing a style is free of charge.
  3. Choosing the Right Style Directions – The selection of style directions is a crucial aspect of using the Style Tuner. Users can compare two styles at a time, choosing one, the other, or neither if the options do not meet their preferences. It is recommended to choose between 5 and 10 directions to maintain a balance between specificity and versatility.
  4. Selecting and Deselecting Styles – As users make their choices, the Style Tuner dynamically updates the style code. This real-time adjustment allows users to see the impact of their choices immediately, facilitating more informed decisions.
  5. Using the Style Code in a Prompt – Once a style code is generated, it can be used in a prompt to apply the chosen style. This feature allows users to consistently apply their unique styles across different prompts without needing to articulate the look in words.
  6. Differences in Results – The inclusion or exclusion of the style in the prompt can lead to different results. Therefore, users are encouraged to experiment with including and excluding the style to understand its impact on the output.
  7. Blending Style Codes – The Style Tuner also offers the ability to blend style codes together. This feature allows for even greater customization, as users can combine different style codes to create a unique blend that suits their specific needs.
  8. The Style Tuner is a powerful tool in Midjourney’s software, offering users the ability to create and control unique styles. By understanding how to use this feature effectively, users can enhance their customization capabilities and create more personalized outputs.

Tips and tricks

  • You can generate random style codes via --style random (without a Style Tuner)
  • You can combine multiple codes via --style code1-code2
  • You can use --stylize to control the strength of your style code
  • You can take any style code you see and get the style tuner page for it by putting it at the end of this URL

https://tuner.midjourney.com/code/StyleCodeHere

  • Using the Style Tuner URL that someone else made does not cost any fast-hours (unless you use it for making images with the codes)

Please note: styles are optimized for your prompt and may not always transfer as intended to other prompts (i.e. a cat style may act unexpectedly for cities, but a cat style should transfer to a dog).
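
For instance, given two hypothetical codes abc123 and xyz789, the tips above translate into prompts like these:

/imagine a lighthouse at dusk --style random
/imagine a lighthouse at dusk --style abc123-xyz789 --stylize 250
https://tuner.midjourney.com/code/abc123

The first line asks for a random style, the second blends the two codes with an increased stylization strength, and the URL opens the Style Tuner page behind the first code.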

AI art generator

Midjourney is an independent research lab exploring new mediums of thought and expanding the imaginative powers of the human species. They have developed an AI art generator also known as Midjourney AI, which functions similarly to other AI art generation tools, like OpenAI’s DALL-E or Google’s Imagen. Users can prompt the AI with descriptions, and it generates images that attempt to match those prompts. The nuances of these systems generally involve complex neural networks, such as variants of Generative Adversarial Networks (GANs) or Diffusion models, which can process natural language inputs and translate them into visual outputs.

While not all details of Midjourney’s underlying technology are public, it likely uses a large dataset of images and text to learn how to create visuals from descriptions. As with any AI system that learns from data, its output quality can vary depending on the specificity of the prompts, the diversity of the training data, and the particular biases present in that data.

The tool’s capabilities have been demonstrated in various showcases, where the AI has created imaginative and sometimes surreal artworks. Users have noted its ability to create detailed and cohesive images, but like any AI system, it may sometimes produce unexpected or undesired results. It is also a topic of discussion in terms of copyright and the ethics of AI-generated art, as it pertains to the originality of artwork and the creative process.

Filed Under: Guides, Top News

Disclosure: Some of our articles include affiliate links. If you buy something through one of these links, timeswonderful may earn an affiliate commission. Learn about our Disclosure Policy.