XBMC unredirect fullscreen windows
Could your update solve this problem? Great job. KWin scripting and optimizations for 4. I guess this should probably fix the sluggish window resize performance with desktop effects enabled? No, that one is unrelated and mostly caused by bad drivers. At UDS we discussed some ideas on how to improve resizing, but until those improvements hit KWin it will take at least one more year.
This is great news, and thanks for all your efforts on improving KWin. There seems to be some sort of conflict between two of the KWin compositing effects: blur and shadow. I have noticed that when the blur effect is activated, window shadows (menu shadows, specifically) get corrupted or drawn badly; I noticed this on both ATI and NVidia cards.
I don't know if it is fixed; I don't think so. If you copy two images, let's say image1… Maybe I am not up to date, but does KDE turn off compositing when a window is fullscreen? If I want to play a game, my computer does not handle the game and compositing at the same time very well.
My media center does not handle HD movies and compositing very well either. There is something called "unredirect fullscreen windows": it disables compositing for the screen while a fullscreen window is shown, but it is not the same as turning compositing off completely. The option has been enabled by default since 4. But there are distributions which turn it off by default.
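On KDE 4 systems the same setting is exposed in System Settings under Desktop Effects (Advanced tab) as suspending effects for fullscreen windows. The snippet below is only a rough sketch of toggling it from a shell; it assumes the kwinrc key is named UnredirectFullscreen, which may differ between KDE versions, so check your own kwinrc first:

    # assumed key name; verify against your kwinrc before relying on this
    kwriteconfig --file kwinrc --group Compositing --key UnredirectFullscreen true
    # ask KWin to reload its configuration
    qdbus org.kde.KWin /KWin reconfigure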
Starting with 4. Here is how you can optimize the blur filter: use a linear texture filter instead of nearest, and shift your sampling coordinates so they fall halfway between two texels instead of exactly on each texel. With linear filtering, a single fetch taken between two texels returns their weighted average, so each texture read effectively samples two texels and the number of fetches is roughly halved. This is a relatively little-known trick used for blur filters in games.
For example, when you want to turn the volume up using a multimedia key on the keyboard, it shows the volume bar; or when you want to show the menu by moving the cursor down in SMPlayer. Is there any chance that the blur plugin will run on my computer, if in 4. Thank you very much! It seems you did a great job. The free software community is so strong because of members like you! Worse, everything which was running smoothly before (cube, wobble, even resizing) feels jerky as long as blur is enabled.
This sounds like something is constantly repainting. Furthermore, e.g. … So is almost every effect. It has been like this since the blur effect was enabled again, I think in 4. Please tell me if I can provide any logs etc. I am really sorry, but KWin is really unusable compared to Compiz, and no one addresses this issue. The quick flicker causes no performance or usability issue.
When I start VirtualBox the problem still exists: a noticeable delay in VM response while in fullscreen or seamless mode. I also have this problem, and I am running VirtualBox 1. I had the exact same issue. It also happens to me; however, it does so even in windowed mode. It also happens when releasing a drag in one place and the system actually dropping it in another. Please reopen if still relevant with a recent VirtualBox release.
Note that this option might be removed without notice once the player's timing code does not inherently need to do these things anymore. By default, a detected value is used. Keep in mind that setting an incorrect value (even if only slightly incorrect) can ruin video playback.
On multi-monitor systems, there is a chance that the detected value is from the wrong monitor. Set this option only if you have reason to believe the automatically determined value is wrong. Specify the hardware video decoding API that should be used if possible.
Whether hardware decoding is actually done depends on the video codec. If hardware decoding is not possible, mpv will fall back on software decoding. Hardware decoding is not enabled by default, because it is typically an additional source of errors. It is worth using only if your CPU is too slow to decode a specific video. The default Ctrl+h key binding toggles this option between auto and no.
Always enabling HW decoding by putting it into the config file is discouraged. Use one of the auto modes if you want to enable hardware decoding. Explicitly selecting the mode is mostly meant for testing and debugging. It's a bad idea to put an explicit selection into the config file if you want things to just keep working after updates and so on. Even if enabled, hardware decoding is still only white-listed for some codecs.
See --hwdec-codecs to enable hardware decoding in more cases. This still depends on what VO you are using. Also note that if the first found method doesn't actually work, it will always fall back to software decoding instead of trying the next method (this might matter on some Linux systems). Unlike auto, this will not try to enable unknown or known-to-be-bad methods.
In addition, this may disable hardware decoding in other situations where it's known to cause problems, but currently this mechanism is quite primitive. As an example of something that still causes problems: certain combinations of HEVC and Intel chips on Windows tend to cause mpv to crash, most likely due to driver bugs.
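As a minimal sketch of how these modes are typically used (assuming an mpv build with the relevant APIs compiled in), hardware decoding can be enabled from mpv.conf or tried for a single run from the command line:

    # mpv.conf: let mpv pick a hardware decoding method automatically
    hwdec=auto

    # command line: try a specific API for one file
    mpv --hwdec=vaapi video.mkv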
The auto-copy mode selects copy-back methods like vaapi-copy and so on. If none of these work, hardware decoding is disabled. This mode is usually guaranteed to incur no additional quality loss compared to software decoding (assuming modern codecs and an error-free video stream), and will allow CPU processing with video filters.
This mode works with all video filters and VOs. Because these copy the decoded video back to system RAM, they're often less efficient than the direct modes, and may not help too much over software decoding. Currently, only the vaapi, nvdec and cuda methods work with Vulkan.
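For instance (a sketch that assumes your system supports VA-API), a copy-back mode can be combined with a CPU video filter, which the direct modes may not allow:

    # copy-back decoding keeps frames in system RAM, so CPU filters still apply
    mpv --hwdec=vaapi-copy --vf=lavfi=[hqdn3d] video.mkv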
It also requires the opengl EGL backend. Pass weave (or leave the option unset) to not attempt any deinterlacing. In theory, hardware decoding does not reduce video quality (at least for the codecs h264 and HEVC).
However, due to restrictions in video output APIs, as well as bugs in the actual hardware decoders, there can be some loss, or even blatantly incorrect results. This means certain colorspaces may not display correctly, and certain filtering such as debanding cannot be applied in an ideal way. This will also usually force the use of low quality chroma scalers instead of the one specified by --cscale. In other cases, hardware decoding can also reduce the bit depth of the decoded image, which can introduce banding or precision loss for 10-bit files.
However, vdpau doesn't support 10-bit or HDR encodings, so these limitations are unlikely to be relevant. Enabling deinterlacing, or simply the respective post-processing filters, will possibly at least reduce color quality by converting the output to an 8-bit format. It appears to always use BT.601 for the conversion. Some drivers appear to convert to limited range RGB, which gives a faded appearance.
In addition to driver-specific behavior, global system settings might also affect this. This can give incorrect results even with completely ordinary video sources. It can also sometimes cause massive framedrops for unknown reasons. Caution is advised, and nvdec should always be preferred. It always converts to YUV, which may be lossy, depending on how chroma sub-sampling is done during conversion.
It also discards the top left pixel of each frame for some reason. All other methods, in particular the copy-back methods (like dxva2-copy etc.), should generally be safe. At the very least, they shouldn't affect the colors of the image. In particular, auto-copy will only select "safe" modes (although potentially slower than other methods), but there's still no guarantee the chosen hardware decoder will actually work correctly.
In general, it's very strongly advised to avoid hardware decoding unless absolutely necessary, i.e. unless your CPU is too slow to decode the video in question. If you run into any weird decoding issues, frame glitches or discoloration, and you have --hwdec turned on, the first thing you should try is disabling it.
This option is for troubleshooting hwdec interop issues. Since it's a debugging option, its semantics may change at any time.
This is useful for the gpu and libmpv VOs for selecting exactly which hwdec interop context to use. Effectively it can also be used to block loading of certain backends. If set to auto (the default), the behavior depends on the VO: for gpu, it does nothing, and the interop context is loaded on demand when the decoder probes for --hwdec support. For libmpv, which has no on-demand loading, this is equivalent to all. If set to all, it attempts to load all interop contexts at GL context creation time.
Other than that, a specific backend can be set, and the list of them can be queried with help (mpv CLI only). Runtime changes to this are ignored (the current option value is used whenever the renderer is created).
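As a small usage sketch, the list of available interop backends can be printed from the command line; the spelling below assumes the current --gpu-hwdec-interop option name (older releases used the aliases mentioned next):

    mpv --gpu-hwdec-interop=help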
The old aliases --opengl-hwdec-interop and --hwdec-preload are barely related to this anymore, but will be somewhat compatible in some cases. Number of GPU frames hardware decoding should preallocate (default: see --list-options output). Setting it too high simply wastes GPU memory and has no advantages. This value is used only for hardware decoding APIs which require preallocating surfaces (known examples include d3d11va and vaapi).
For other APIs, frames are allocated as needed. The details depend on the libavcodec implementations of the hardware decoders. The required number of surfaces depends on dynamic runtime situations. The default is a fixed value that is thought to be sufficient for most uses.
But in certain situations, it may not be enough. Set the internal pixel format used by hardware decoding via --hwdec (default: no). The special value no selects an implementation-specific standard format. Most decoder implementations support only one format, and will fail to initialize if the format is not supported. Some implementations might support multiple formats.
In particular, videotoolbox is known to require uyvy for good performance on some older hardware. For the OpenGL GPU backend, the default device used for decoding is the one being used to provide gpu output (and in the vast majority of cases, only one GPU will be present). For the copy hwdecs, the default device will be the first device enumerated by the CUDA libraries, however that is done.
For the Vulkan GPU backend, decoding must always happen on the display device, and this option has no effect. Enables pan-and-scan functionality (cropping the sides of e.g. a 16:9 video to make it fit a 4:3 display without black bars). The range controls how much of the image is cropped.
May not work with all video output drivers. This option has no effect if the --video-unscaled option is used. Override the video aspect ratio, in case aspect information is incorrect or missing in the file being played. Normally you should not set this. Try the various choices if you encounter video that has the wrong aspect ratio in mpv, but seems to be correct in other players. Disable scaling of the video. If the window is larger than the video, black bars are added.
Otherwise, the video is cropped, unless the option is set to downscale-big, in which case the video is fit to the window. The video can still be influenced by the other --video-* options. This option disables the effect of --panscan. Note that the scaler algorithm may still be used, even if the video isn't scaled.
For example, this can influence chroma conversion. The video will also still be scaled in one dimension if the source uses non-square pixels (e.g. anamorphic DVDs). This option is disabled if the --no-keepaspect option is used. Moves the displayed video rectangle by the given value in the X or Y direction. The unit is in fractions of the size of the scaled video (the full size, even if parts of the video are not visible due to panscan or other options).
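To illustrate a couple of these options together (note that newer mpv releases spell the aspect override option --video-aspect-override, while older ones used --video-aspect; the values here are only examples):

    # force a 16:9 display aspect and nudge the video slightly to the right
    mpv --video-aspect-override=16:9 --video-pan-x=0.05 video.mkv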
Rotate the video clockwise, in degrees. If no is given, the video is never rotated, even if the file has rotation metadata. The rotation value is added to the rotation metadata, which means the value 0 would rotate the video according to the rotation metadata. Adjust the video display scale factor by the given value. The parameter is given as log base 2; for example, a value of 1 doubles the display size. Multiply the video display size with the given value (default: 1).
If a non-default value is used, this will be different from the window size, so video will either be cut off, or black bars will be added. This value is multiplied with the value derived from --video-zoom and the normal video aspect ratio. Moves the video rectangle within the black borders, which are usually added to pad the video to the screen if the video and screen aspect ratios are different. Set extra video margins on each border (default: 0).
Each value is a ratio of the window size, using a range of 0.0 to 1.0. The video is "boxed" by these margins. The window size is not changed. In particular it does not enlarge the window, and the margins will cause the video to be downscaled by default. This may or may not change in the future. Subtitles may still use the margins, depending on --sub-use-margins and similar options.
These options were created for the OSC. Some odd decisions, such as making the margin values a ratio instead of pixels, were made for the sake of the OSC. It's possible that these options may be replaced by ones that are more generally useful.
The behavior of these options may change to fit OSC requirements better, too. Works in --no-correct-pts mode only. Enable or disable deinterlacing (default: no). Interlaced video shows ugly comb-like artifacts, which are visible on fast movement. Enabling this typically inserts the yadif video filter in order to deinterlace the video, or lets the video output apply deinterlacing if supported.
This behaves exactly like the deinterlace input property (usually mapped to the d key). Keep in mind that this will conflict with manually inserted deinterlacing filters, unless you take care. Since mpv 0. Might be useful for scripts which just want to determine some file properties. For audio-only playback, any value greater than 0 will quit playback immediately after initialization. The value 0 works as with video.
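A tiny usage sketch (mainly interesting for scripts, as noted above): requesting zero frames makes mpv load the file and quit immediately without actually playing it, so it can be used to check that a file opens at all:

    mpv --frames=0 video.mkv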
Normally, output devices such as PC monitors use full range color levels. Providing full range output to a device expecting studio level input results in crushed blacks and whites; the reverse results in dim gray blacks and dim whites. It is advisable to use your graphics driver's color range option instead, if available. Allow hardware decoding for a given list of codecs only. The special value all always allows all codecs.
Remove the prefix, e.g. use h264 instead of lavc:h264. By default, this is set to h264,vc1,hevc,vp8,vp9,av1. This is usually only needed with broken GPUs, where a codec is reported as supported, but decoding causes more problems than it solves.
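For example (a sketch; adjust the list to whatever your GPU handles reliably), the whitelist can be narrowed in mpv.conf:

    # only allow hardware decoding for H.264 and HEVC
    hwdec-codecs=h264,hevc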
Fall back to software decoding if the hardware-accelerated decoder fails (default: 3). If this is a number, then the fallback will be triggered if N frames fail to decode in a row. Setting this to a higher number might break the playback-start fallback: if a fallback happens, parts of the file will be skipped, approximately by the number of packets that could not be decoded.
Values below an unspecified count will not have this problem, because mpv retains the packets. Enable direct rendering (default: yes). If this is set to yes, the video will be decoded directly to GPU video memory or staging buffers. This can speed up video upload, and may help with large resolutions or slow hardware. This works only with the following VOs: …
Using video filters of any kind that write to the image data or output newly allocated frames will silently disable the DR code path. Pass AVOptions to the libavcodec decoder. Skips the loop filter (AKA deblocking) during H.264 decoding. Since the filtered frame is supposed to be used as a reference for decoding dependent frames, this has a worse effect on quality than not doing deblocking on e.g. MPEG-2 video.
But at least for high bitrate HDTV, this provides a big speedup with little visible quality loss. Use the given audio device. This consists of the audio output name (e.g. alsa), followed by the audio output specific device name. The default value for this option is auto, which tries every audio output in preference order with the default device. Listing devices with --audio-device=help outputs the device name in quotes, followed by a description. The device name is what you have to pass to the --audio-device option.
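A small usage sketch (the device name below is only illustrative; use whatever --audio-device=help prints on your system):

    # list available devices, then pick one explicitly
    mpv --audio-device=help
    mpv --audio-device='alsa/dmix:default' music.flac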
The list of audio devices can be retrieved via the client API by using the audio-device-list property. While the option normally takes one of the strings as indicated by the methods above, you can also force the device for most AOs by building it manually. However, the --ao option will strictly force a specific AO. To avoid confusion, don't use --ao and --audio-device together. MPlayer and mplayer2 required you to replace any ',' with '.' in the device name. For example, to use the device named dmix:default, you had to escape it in this way; mpv no longer requires this. Enable exclusive output mode. In this mode, the system is usually locked out, and only mpv will be able to output audio. This only works for some audio outputs, such as wasapi and coreaudio. Other audio outputs silently ignore this option. They either have no concept of exclusive mode, or the mpv side of the implementation is missing. List of codecs for which compressed audio passthrough should be used. Possible codecs are ac3, dts, dts-hd, eac3, truehd.
Multiple codecs can be specified by separating them with ','. If both dts and dts-hd are specified, it behaves equivalently to specifying dts-hd only. In earlier mpv versions you could use --ad to force the spdif wrapper.
This does not work anymore. There is not much reason to use this. Specify a priority list of audio decoders to be used, according to their decoder name. When determining which decoder to use, the first decoder that matches the audio format is selected. If that is unavailable, the next decoder is used. Finally, it tries all other decoders that are not explicitly selected or rejected by the option. Both of these should not normally be used, because they break normal decoder auto-selection!
Both of these methods are deprecated. Use --audio-spdif instead. Set the startup volume. Negative values can be passed for compatibility, but are treated as 0. The current behavior is that softvol is always enabled, i.e. mpv's internal volume control is always used. The other behaviors are not available anymore, although auto almost matches current behavior in most cases. The no behavior is still partially available through the ao-volume and ao-mute properties. But there are no options to reset these.
Values up to 6 are also accepted, but are purely experimental. This option only shows an effect if the AC-3 stream contains the required range compression information. The standard mandates that DRC is enabled by default, but mpv and some other players ignore this for the sake of better audio quality. Control which audio channels are output (e.g. surround vs. stereo). There are the following possibilities: Use the system's preferred channel layout. If there is none (such as when accessing a hardware device instead of the system mixer), force stereo.
Some audio outputs might simply accept any layout and do downmixing on their own. Send the audio device whatever it accepts, preferring the audio's original channel layout. Can cause issues with HDMI (see the warning below). List of ','-separated channel layouts which should be allowed. Technically, this only adjusts the filter chain output to the best matching layout in the list, and passes the result to the audio API. It's possible that the audio API will select a different channel layout.
Force a plain stereo downmix. This is a special case of the previous item. See the paragraphs below for implications. If a list of layouts is given, each item can be either an explicit channel layout name (like 5.1) or a channel number. Channel numbers refer to default layouts, e.g. 2 means stereo and 6 means 5.1. The --audio-channels=help output also lists speaker names, which can be used to express arbitrary channel layouts (e.g. fl-fr-lfe for 2.1).
If the list of channel layouts has only 1 item, the decoder is asked to produce output in that layout. This sometimes triggers decoder downmix, which might be different from the normal mpv downmix. This happens because the decision whether to use decoder downmix is made long before the audio device is opened. If the channel layout of the media file does not match what the audio device ends up using, you may need to change the channel layout of the system mixer to achieve your desired output, as mpv does not have control over it.
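As a hedged example (the exact layouts to allow depend on your receiver), an explicit whitelist can be put in mpv.conf so mpv never sends an unsupported layout:

    # prefer 7.1, then 5.1, then fall back to stereo
    audio-channels=7.1,5.1,stereo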
Using auto can cause issues when using audio over HDMI. If the receiver gets an unsupported channel layout, random things can happen, such as dropping the additional channels or adding noise. You are recommended to set an explicit whitelist of the layouts you want, as in the example above. Determines whether to display cover art when playing audio files, and with what priority. It will display the first image found, and additional images are available as video tracks.
This is a path list option. Try to play consecutive audio files with no silence or disruption at the point of file change. Default: weak. This feature is implemented in a simple manner and relies on audio output device buffering to continue playback while moving from one file to another.
If playback of the new file starts slowly, for example because it is played from a remote network location or because you have specified cache settings that require time for the initial cache fill, then the buffered audio may run out before playback of the new file can start.
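A minimal usage sketch (the file names are illustrative):

    # play an album back-to-back with gapless transitions
    mpv --gapless-audio=yes track01.flac track02.flac track03.flac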
Set the maximum amplification level in percent (default: 130). A value of 130 will allow you to adjust the volume up to about double the normal level. Load additional audio files matching the video filename. The parameter specifies how external audio files are matched. Equivalent to the --sub-file-paths option, but for auto-loaded audio files.
Set the audio output minimum buffer. The audio device might actually create a larger buffer if it pleases. If the device creates a smaller buffer, additional audio is buffered in an additional software buffer. Making this larger will make soft-volume and other filters react slower, introduce additional issues on playback speed change, and block the player on audio format changes.
A smaller buffer might lead to audio dropouts. This option should be used for testing only. If a non-default value helps significantly, the mpv developers should be contacted. This can happen every time audio over HDMI is stopped and resumed. In order to compensate for this, you can enable this option to not stop and restart audio on seeks, and fill the gaps with silence.
Likewise, when pausing playback, audio is not stopped, and silence is played while paused. Note that if no audio track is selected, the audio device will still be closed immediately. Enabling this option is strongly discouraged.
Changing styling and position does not work with all subtitles. Subtitles in ASS format are normally not changed intentionally, but overriding them can be controlled with --sub-ass-override.
They are now all in this section. If you use --sub-file only once, this subtitle file is displayed by default. If --sub-file is used multiple times, the subtitle to use can be switched at runtime by cycling subtitle tracks.
It's possible to show two subtitles at once: use --sid to select the first subtitle index, and --secondary-sid to select the second index. Select a secondary subtitle stream. This is similar to --sid.
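A brief sketch (track numbers depend on the file):

    # show subtitle track 1 in its normal position and track 2 at the top
    mpv --sid=1 --secondary-sid=2 movie.mkv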
If a secondary subtitle is selected, it will be rendered as a toptitle (i.e. at the top of the screen). There are some caveats associated with this feature. For example, bitmap subtitles will always be rendered in their usual position, so selecting a bitmap subtitle as secondary subtitle will result in overlapping subtitles.
Secondary subtitles are never shown on the terminal if video is disabled. Styling and interpretation of any formatting tags is disabled for the secondary subtitle. Internally, the same mechanism as --no-sub-ass is used to strip the styling.
If the main subtitle stream contains formatting tags which display the subtitle at the top of the screen, it will overlap with the secondary subtitle.
To prevent this, you could use --no-sub-ass to disable styling in the main subtitle stream. This affects ASS subtitles as well, and may lead to incorrect subtitle rendering. I get tearing with the option "Unredirect Fullscreen Windows" checked, but no tearing with it unchecked. @BartvanHeukelom same thing with me. — Mina Michael