Welcome to Shenzhen Deren Manufacturing Co., Ltd
Deren Precision Manufacturing Co., Ltd
Focus on custom parts and industrial blades.
Fine products, craftsmen's service, 10 years of precision manufacturing.
Hotline & WeChat: 15814001449


          Industry information

Sora comes out: AI text-to-video brightens our eyes.

Time: 2024-02-21  Views: 13594
1. Introduction to Sora's Concept
On February 16, 2024, OpenAI released Sora, a large text-to-video model that generates videos from natural-language descriptions. As soon as the news broke, social media platforms around the world were once again stunned by OpenAI: Sora abruptly raised the ceiling for AI video. Note that text-to-video tools such as Runway and Pika are still struggling to stay coherent beyond a few seconds, while Sora can directly generate a 60-second, single-shot video. Remarkably, Sora has not even been officially released, yet it can already achieve this.
The name Sora comes from the Japanese word for "sky" (そら, sora), chosen to suggest its limitless creative potential.
Compared with the AI video models mentioned above, Sora's advantage is that it can render details accurately, understand how objects exist in the physical world, and generate characters with rich emotions. The model can even generate videos from prompts or still images, and fill in missing frames in existing videos.
2. The Implementation Path of Sora
Sora's significance lies in once again pushing the upper limit of AIGC in AI-driven content creation. Before this, text models such as ChatGPT had already begun to assist content creation, including generating illustrations and visuals, and even using virtual humans to make short videos. Sora, by contrast, is a large model focused on video generation: given text or images as input, it can edit video in various ways, including generation, connection, and extension. It belongs to the category of multimodal large models, which extend and expand on language models such as GPT.
Sora processes video patches in much the same way GPT-4 manipulates text tokens. The key innovation is treating video frames as sequences of patches, analogous to word tokens in language models, which lets it handle diverse video information effectively. Combined with text conditioning, Sora can generate contextually relevant, visually coherent videos from text prompts.
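The patch-as-token idea can be sketched in a few lines of NumPy. The patch sizes, shapes, and flattening scheme below are illustrative assumptions; Sora's actual patching scheme has not been published.

```python
import numpy as np

def video_to_patches(video, patch_hw=4, patch_t=2):
    """Split a video tensor into spacetime patches and flatten each
    one into a vector -- the visual analogue of a word token.
    video: array of shape (T, H, W, C); T, H, W are assumed to be
    divisible by the patch sizes for simplicity."""
    T, H, W, C = video.shape
    patches = (
        video.reshape(T // patch_t, patch_t,
                      H // patch_hw, patch_hw,
                      W // patch_hw, patch_hw, C)
             .transpose(0, 2, 4, 1, 3, 5, 6)  # group the patch dims together
             .reshape(-1, patch_t * patch_hw * patch_hw * C)
    )
    return patches  # shape: (num_patches, patch_dim)

# A 16-frame 32x32 RGB clip becomes a sequence of 512 patch "tokens".
clip = np.random.rand(16, 32, 32, 3)
tokens = video_to_patches(clip)
print(tokens.shape)  # (512, 96)
```

A Transformer can then attend over these 512 vectors exactly as a language model attends over a sequence of word tokens.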
In principle, Sora trains on video in three main steps. First, a video compression network reduces videos or images to a compact, efficient latent representation. Next, spatiotemporal patch extraction decomposes the visual information into smaller units, each containing a portion of the spatial and temporal information, so that Sora can process them in a targeted way in later steps. Finally, video generation: the input text or images are encoded, and a Transformer model (the same basic architecture behind ChatGPT) decides how to transform or combine these units into complete video content.
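The generation side of the final step is a diffusion sampling loop driven by a Transformer denoiser. As a caricature under stated assumptions: the "denoiser" below is a toy stand-in that merely nudges noise toward a fake text embedding, whereas the real model is a learned network.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_denoiser(x, text_emb, t):
    """Stand-in for the diffusion Transformer: given the current noisy
    sample, a text embedding, and the timestep, it should predict a
    cleaner sample. Here it just pulls x toward the embedding."""
    return x - 0.1 * (x - text_emb)  # purely illustrative update rule

def generate(text_emb, steps=50):
    """The basic diffusion sampling loop: start from pure noise and
    repeatedly denoise, conditioned on the text."""
    x = rng.standard_normal(text_emb.shape)
    for t in reversed(range(steps)):
        x = toy_denoiser(x, text_emb, t)
    return x

prompt_embedding = rng.standard_normal((32, 96))  # fake text condition
sample = generate(prompt_embedding)
# After 50 steps the sample has converged close to the condition.
print(float(np.abs(sample - prompt_embedding).mean()))
```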
          Overall, the emergence of Sora will further promote the development of AI video generation and multimodal large models, bringing new possibilities to the field of content creation.
3. Sora's Six Advantages
A reporter from the Daily Economic News went through the report and summarized six advantages of Sora:
(1) Accuracy and diversity: Sora can convert short text descriptions into high-definition videos up to one minute long. It accurately interprets the text input provided by users and generates high-quality video clips with varied scenes and characters, covering a wide range of themes, from characters and animals to lush landscapes, urban scenes, gardens, and even an underwater New York City, delivering diverse content to match user requirements. According to Medium, Sora can accurately interpret long prompts of up to 135 words.
(2) Powerful language understanding: OpenAI uses the re-captioning technique from the DALL·E model to generate descriptive captions for the visual training data, which improves both the textual fidelity and the overall quality of the generated video. In addition, as with DALL·E 3, OpenAI uses GPT to expand short user prompts into longer, detailed captions that are sent to the video model. This lets Sora generate high-quality videos that closely follow user prompts.
(3) Generating videos from images or videos: Sora can not only convert text into video but also accept other kinds of input prompts, such as existing images or videos. This lets Sora perform a wide range of image and video editing tasks, such as creating seamlessly looping videos, animating static images, and extending videos forward or backward in time. In the report, OpenAI showed a demo video generated from images produced by DALL·E 2 and DALL·E 3, which not only proves Sora's capabilities but also demonstrates its potential in image and video editing.
(4) Video extension: because it accepts diverse input prompts, users can create videos from images or build on existing videos. As a Transformer-based diffusion model, Sora can also extend videos forward or backward along the timeline.
(5) Excellent device compatibility: Sora has flexible sampling, ranging from 1920x1080 in widescreen to 1080x1920 in portrait, and can easily handle any video size in between. This means Sora can generate content that matches the native aspect ratio of different devices. Before producing high-resolution content, Sora can also quickly prototype content at a smaller size.
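One plausible way to serve any aspect ratio between those two extremes at a fixed pixel budget is to solve for width and height directly. This helper is our own illustration, not something from Sora's report.

```python
import math

def frame_size(aspect_ratio, pixel_budget=1920 * 1080):
    """Pick a width and height for the given aspect ratio while keeping
    roughly the same total pixel count, rounded to multiples of 8
    (a common requirement for video codecs and patch grids)."""
    h = round(math.sqrt(pixel_budget / aspect_ratio) / 8) * 8
    w = round(h * aspect_ratio / 8) * 8
    return w, h

print(frame_size(16 / 9))  # (1920, 1080) widescreen
print(frame_size(9 / 16))  # (1080, 1920) portrait
print(frame_size(1.0))     # (1440, 1440) square, same pixel budget
```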
(6) Consistency and continuity of scenes and objects: Sora can generate videos with dynamic perspective changes, and the movement of characters and scene elements in three-dimensional space looks natural. Sora also handles occlusion well. One problem with existing models is that they can lose track of objects that leave the field of view; by predicting many frames at once, Sora keeps the subject of the image consistent even when it is temporarily out of view.
4. Disadvantages of Sora
Although Sora is very powerful, it still has problems simulating the physics of complex scenes, understanding specific cause-and-effect relationships, handling spatial details, and accurately depicting events that unfold over time.
In one video generated by Sora, the overall picture is highly coherent, with excellent image quality, detail, lighting, and color. On closer inspection, however, the character's legs are slightly twisted, and the stepping motion does not match the rest of the scene.
In another video, the number of dogs keeps increasing; although the transitions along the way are very smooth, the result may have drifted from what was originally requested.
(1) Inaccurate simulation of physical interactions:
The Sora model is not precise enough when simulating basic physical interactions, such as glass shattering. This may be because the training data lacks enough examples of such physical events, or because the model cannot fully learn the underlying principles of these complex physical processes.
(2) Incorrect changes of object state:
When simulating interactions that significantly change an object's state, such as eating food, Sora does not always reflect the change accurately. This suggests the model has limits in understanding and predicting how object states evolve.
(3) Incoherence in long video samples:
When generating long videos, Sora may produce incoherent plots or details, likely because the model struggles to maintain contextual consistency over long time spans.
(4) Objects appearing out of nowhere:
Objects may appear in a video for no reason, showing that the model's grasp of spatial and temporal continuity still needs improvement.
Here we need to introduce the concept of a "world model".
What is a world model? An example:
From memory, you know roughly how much a cup of coffee weighs. So when you reach for one, your brain accurately predicts how much force to use, and the cup comes up smoothly; you don't even notice. But what if the cup happens to be empty? You apply a lot of force to a very light cup, and your hand immediately feels that something is wrong. Your memory then gets a new note: the cup may also be empty. The next time you predict, you won't be wrong. The more you do, the more complex the world model in your brain becomes, and the more accurately it predicts how the world will respond. This is how humans interact with the world: through a world model.
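The "remember first, predict later" loop in the coffee-cup story can be written down in a few lines. The weights and the 0.5 update rate below are made up purely for illustration.

```python
def pick_up_cup(memory, actual_weight):
    """Predict the grip force from memory, feel the prediction error,
    and update the memory for next time -- a one-line world model."""
    predicted = memory['cup_weight']      # predict from experience
    error = actual_weight - predicted     # the surprise signal
    memory['cup_weight'] += 0.5 * error   # learn from the surprise
    return predicted, error

memory = {'cup_weight': 300.0}  # grams: a full cup, from experience
# The cup turns out to be empty (120 g): big surprise, memory updates.
predicted, error = pick_up_cup(memory, 120.0)
print(predicted, error, memory['cup_weight'])  # 300.0 -180.0 210.0
```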
In videos generated by Sora, a bitten object may not always show a bite mark; the model still makes mistakes. But what it can already do is both impressive and a little frightening, because "remember first, predict later" is exactly how humans understand the world. That mode of thinking is the world model.
There is a sentence in Sora's technical report:
"Our results suggest that scaling video generation models is a promising path towards building general purpose simulators of the physical world."
In other words, what OpenAI ultimately wants to build is not a text-to-video tool but a general-purpose "physical world simulator": a world model that models the real world.
          Address: 1st Floor, No. 67, Langkou Industrial Zone, Dalang Street, Longhua District, Shenzhen