Search Results

Search found 6392 results on 256 pages for 'reduce duplicate'.


  • Fixing copy/paste for Remote Desktop Connection sessions [duplicate]

    - by netadictos
Possible Duplicate: Can't copy and paste in Remote Desktop Connection session

    Recently I have been working with Remote Desktop Connection, which I use to access a virtual machine running under Hyper-V. I have had many problems with simple cut-and-paste between my machine and the virtual one: the link between my clipboard and the remote clipboard often breaks. It typically happens when I copy or cut on the remote machine, then copy on my own computer, and then try to paste back on the remote machine. How do I fix this?
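
    The usual fix (well known Windows behavior, stated here as general knowledge rather than anything from this thread) is to restart rdpclip.exe, the process on the remote machine that brokers the shared clipboard and is prone to hanging:

        taskkill /IM rdpclip.exe /F
        rdpclip.exe

    Run both commands on the remote machine; the clipboard link is usually restored immediately.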

    Read the article

  • Dropping duplicate|redundant Unique Constraint from FILESTREAM table

    - by electricsk8
I have a table with a FILESTREAM column, and it has two unique constraints specified for the same FILESTREAM GUID column, i.e.:

        ALTER TABLE [dbo].[TableName]
          ADD CONSTRAINT [UQ_TableName_33C4988760FC61CA] UNIQUE NONCLUSTERED ([GUID_Column]);
        GO
        ALTER TABLE [dbo].[TableName]
          ADD CONSTRAINT [UQ_TableName_33C49887145C0A3F] UNIQUE NONCLUSTERED ([GUID_Column]);
        GO

    I'd like to drop one of the unique constraints, as they are duplicates. However, when I try to drop one of the two duplicate constraints, I receive the following error:

        "A table with FILESTREAM column(s) must have a non-NULL unique ROWGUID column."

    Does anyone know how to remove one of the two constraints?
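
    A sketch of one thing worth trying, on the assumption (not confirmed by this thread) that SQL Server has internally bound only one of the two constraints to the FILESTREAM ROWGUIDCOL requirement: attempt to drop each constraint in turn, since the unbound one can normally be removed while the bound one keeps raising the error. If both fail, adding a fresh unique constraint on the same column first, so the requirement stays satisfied, is the other avenue to test.

        -- Try each in turn; the one SQL Server relies on for the FILESTREAM
        -- requirement will keep failing, the other should drop cleanly.
        ALTER TABLE [dbo].[TableName] DROP CONSTRAINT [UQ_TableName_33C4988760FC61CA];
        GO
        ALTER TABLE [dbo].[TableName] DROP CONSTRAINT [UQ_TableName_33C49887145C0A3F];
        GO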

    Read the article

  • Duplicate Name Exists error from virtual PC with NAT network configuration

    - by Phred
Whenever I change the network configuration to NAT for a virtual machine running under Virtual PC 2007, I get the error "A duplicate name exists on the network." No other machine on my network has this name, and running the VM in any other network configuration doesn't cause this error. Judging by a Google search, this is a common problem with Virtual PC 2007, but no one seems to have a solution. So far I've discovered that turning off NetBIOS over TCP/IP makes the problem go away, but I need to join this VM to a domain, and you can't do that with NetBIOS over TCP/IP turned off.

    Read the article

  • Can I set Windows default second-monitor behaviour to "Extend these displays"?

    - by MT_Head
    I travel to multiple offices (and multiple desks in those offices), and whenever possible I plug an external monitor into my laptop. Whenever I plug in a monitor I haven't used before, Windows defaults to "Duplicate these displays" - which messes up the arrangement of icons on my desktop if the external monitor is a different shape from my laptop's monitor. I then select "Extend these displays", and my laptop screen returns to its original shape - but my icons don't go back to their original arrangement. Grrrrr. Fast-forward a few days or weeks; I've got my icons arranged so I can find stuff again - then I go to a new office and it starts all over again. I'm tired of this. Is it possible to make "Extend these displays" the default behavior? I'm using Windows 8 x64 Home Premium, but I had the same complaint under Windows 7 x64 Ultimate. (Prior to that, I hadn't discovered the joy of dual displays. Ah, the time I wasted...)

    Read the article

  • De-duplicate Firefox bookmarks

    - by Zoredache
What methods exist to de-duplicate Firefox bookmarks? As I search Google, I find that there was previously a plugin called CheckPlaces, but it no longer seems to exist. Another popular suggestion seems to be AM-DeadLink, which I tried, but it completely trashed my bookmarks. (Fortunately I had a backup first, and yes, I had closed Firefox first as instructed.) I was trying to move all my youtube.com bookmarks into a folder. I tried doing a search and then dragging the bookmarks into the folder. Apparently this creates a copy instead of moving them as I expected, so now I have three of everything, since I had tried a couple of times.
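
    If no add-on fits, duplicates can at least be located directly in the places.sqlite database in the Firefox profile (close Firefox first and work on a copy). A sketch, assuming the standard schema where moz_bookmarks.fk points at moz_places.id and type 1 marks a bookmark:

        SELECT p.url, COUNT(*) AS copies
        FROM moz_bookmarks b
        JOIN moz_places  p ON p.id = b.fk
        WHERE b.type = 1            -- 1 = bookmark; 2 = folder; 3 = separator
        GROUP BY p.url
        HAVING COUNT(*) > 1
        ORDER BY copies DESC;

    This only reports the duplicates; deleting them is safer through the Library window than by editing the database directly.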

    Read the article

  • A Duplicate name exists on the network

    - by Adam
Recently we changed our office IT structure from having a dedicated server as the DC, a dedicated server for Exchange, etc. (each running Windows Server 2003 R2), to a single server running Windows SBS 2008, and created a new domain (with a different domain name). We then moved every PC onto the new domain and renamed each PC with a new naming structure. After I had done this, several PCs would get the following message just before the login screen (the Ctrl+Alt+Del screen): "A duplicate name exists on the network." I have checked ADUC and removed the troublesome PCs from the list, renamed each PC, and changed the SID before connecting back to the domain, but I am still getting this message. I have tried everything I can think of but am still getting the problem. Any help would be greatly appreciated.
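
    A diagnostic sketch that may help narrow it down (standard Windows tools; PROBLEM-PC is a placeholder name): check which NetBIOS names each PC has registered, and whether stale records from the old domain still answer for the name in question:

        nbtstat -n                :: names this machine has registered locally
        nbtstat -a PROBLEM-PC     :: ask which host currently owns that name
        nslookup PROBLEM-PC       :: look for stale A records left in DNS

    Stale A records from the old domain's DNS zone, or old entries in WINS if it is still running, are the usual sources of a name answering twice.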

    Read the article

  • Find RARs with duplicate content

    - by Scott McClenning
    I need a utility to find RAR files that contain duplicate data (i.e. files within the RAR that hash the same, but could have different names). I can open the RARs and see the CRCs are the same, but I was hoping for a more automated process that would work in bulk (hundreds of files). Hashing the overall RAR won't help because the file contained within could have different names, or the archive could be compressed at different levels. If needed, a utility that would extract the contents of the RARs and then compare would work, but is not preferred. I would prefer a free utility for Windows, but a pay utility or a utility for Linux would be acceptable.
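
    Absent a ready-made tool, the check can be scripted: unrar already exposes per-file CRC32 values, so archives whose contained CRC sets match are candidate duplicates. A rough Python sketch, assuming the unrar command-line tool is installed and that its "vt" technical listing prints a "CRC32:" line per file (true of recent unrar versions):

        import re
        import subprocess
        import sys
        from collections import defaultdict
        from pathlib import Path

        def crc_set(rar_path):
            # "unrar vt" prints one "CRC32: XXXXXXXX" line per archived file;
            # file names and compression levels don't affect these values.
            out = subprocess.run(["unrar", "vt", str(rar_path)],
                                 capture_output=True, text=True).stdout
            return frozenset(re.findall(r"CRC32:\s*([0-9A-Fa-f]{8})", out))

        groups = defaultdict(list)
        for rar in Path(sys.argv[1]).rglob("*.rar"):
            groups[crc_set(rar)].append(rar)

        for crcs, rars in groups.items():
            if crcs and len(rars) > 1:
                print("Possible duplicates:")
                for r in rars:
                    print("  ", r)

    Since CRC32 is a weak hash, treat matches as candidates to verify by extraction; also note that a set ignores multiplicity, so two identical files inside one archive count once.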

    Read the article

  • Optimising <xsl:apply-templates/> for a set of tags

    - by kalininew
How can I shorten this markup?

        <xsl:template match="BR">
          <br/>
        </xsl:template>
        <xsl:template match="B">
          <strong><xsl:apply-templates /></strong>
        </xsl:template>
        <xsl:template match="STRONG">
          <strong><xsl:apply-templates /></strong>
        </xsl:template>
        <xsl:template match="I">
          <em><xsl:apply-templates /></em>
        </xsl:template>
        <xsl:template match="EM">
          <em><xsl:apply-templates /></em>
        </xsl:template>
        <xsl:template match="OL">
          <ol><xsl:apply-templates /></ol>
        </xsl:template>
        <xsl:template match="UL">
          <ul><xsl:apply-templates /></ul>
        </xsl:template>
        <xsl:template match="LI">
          <li><xsl:apply-templates /></li>
        </xsl:template>
        <xsl:template match="SUB">
          <sub><xsl:apply-templates /></sub>
        </xsl:template>
        <xsl:template match="SUP">
          <sup><xsl:apply-templates /></sup>
        </xsl:template>
        <xsl:template match="NOBR">
          <nobr><xsl:apply-templates /></nobr>
        </xsl:template>
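
    A sketch of the usual consolidation (plain XSLT 1.0): identical rules can share one template via a union pattern, and the tags whose output is just the lower-cased input name can collapse into a single generic rule:

        <xsl:template match="BR">
          <br/>
        </xsl:template>

        <xsl:template match="B|STRONG">
          <strong><xsl:apply-templates /></strong>
        </xsl:template>

        <xsl:template match="I|EM">
          <em><xsl:apply-templates /></em>
        </xsl:template>

        <!-- Tags that map to their own lower-cased name share one rule. -->
        <xsl:template match="OL|UL|LI|SUB|SUP|NOBR">
          <xsl:element name="{translate(name(),
                              'ABCDEFGHIJKLMNOPQRSTUVWXYZ',
                              'abcdefghijklmnopqrstuvwxyz')}">
            <xsl:apply-templates />
          </xsl:element>
        </xsl:template>

    The B|STRONG and I|EM pairs still need explicit rules because there the output name differs from the input name.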

    Read the article

  • How to read reduce/shift conflicts in LR(1) DFA?

    - by greenoldman
I am reading an explanation (the excellent "Parsing Techniques" by D. Grune and C.J.H. Jacobs; p. 293 in the 2nd edition) and I have moved forward from my last question: How to get lookahead symbol when constructing LR(1) NFA for parser? Now I have a "problem" (maybe not a problem, but rather a need for confirmation from more knowledgeable people). The authors present a state in LR(0) which has a reduce/shift conflict. Then they build the DFA for LR(1) for the same grammar, and now they say it does not have a conflict (lookaheads at the end):

        S -> E . eof
        E -> E . - T eof
        E -> E . - T -

    There is an edge from this state labeled -, but none labeled eof. The authors say that on eof there will be a reduce, and on - there will be a shift. However, eof appears as a lookahead on a shift item as well. So my personal understanding of the LR(1) DFA is this: you can drop the lookaheads for shift items, because they serve no purpose now -- shifts rely on the input, not on lookaheads -- and after that, remove duplicates:

        S -> E . eof
        E -> E . - T

    The lookahead for a reduce really acts as input, because at that stage (all required input for the rule has been read) it is the incoming symbol. For shifts, the input symbols are on the edges. So my question is this: am I actually right about dropping lookaheads for shifts (after fully constructing the DFA)?

    Read the article

  • Steps to Investigate Cause of Web.Config Duplicate Section

    - by pauly
Symptoms

    In an IIS .NET 2.0 integrated app pool:

      • Double-clicking to view any web.config section results in the following error dialog: "There was an error while performing this operation.... Filename... web.config... Error: There is a duplicate..."
      • Browsing to the URL displays an "HTTP 500.19" internal server error: "There is a duplicate 'system.web.extensions/scripting/scriptResourceHandler' section defined...."
      • Running the app from VS 2008, an "Unable to start debugging on the web server..." dialog is displayed.

    Things Tried

      • Looked at other application directories on the same IIS server. No problem viewing web.config contents or serving up the app.
      • Removed and re-added the application in IIS.
      • Checked out a new version of the source code.
      • Reverted to prior versions of the web.config file.
      • Looked for web.config files that might have duplicate sections in: the Inetpub root; "C:\Windows\Microsoft.NET\Framework\v2.0.50727\CONFIG\machine.config"; the "Views" subfolder of the ASP.Net MVC app.
      • Checked out the source code to another dev machine and set up the IIS 7 app folder there. No problem with web.config.

    Question

    If the reason for this error is another web.config file, where else should I look? Are there other reasons for these symptoms?
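
    One well-known cause worth checking (an assumption here, but it matches the scriptResourceHandler symptom): once .NET 3.5 is installed, the v2.0.50727 machine.config already declares the system.web.extensions section group, so a web.config carried forward from a .NET 2.0 AJAX-era project that declares it again produces exactly this duplicate-section 500.19 under the integrated pipeline. The usual fix is to delete the stale declaration from the application's web.config, roughly this shape (names follow the stock ASP.NET AJAX 1.0 template; the elided parts vary per project):

        <!-- Remove this block from web.config if the server's machine.config
             (.NET 3.5+) already declares it. -->
        <configSections>
          <sectionGroup name="system.web.extensions" type="...">
            ...
          </sectionGroup>
        </configSections>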

    Read the article

  • Delete Duplicate records from large csv file C# .Net

    - by Sandhurst
I have created a solution which reads a large CSV file, currently 20-30 MB in size. I have tried to delete the duplicate rows based on certain column values that the user chooses at run time, using the usual technique of finding duplicate rows, but it is so slow that it seems the program is not working at all. What other technique can be applied to remove duplicate records from a CSV file? Here's the code; I am definitely doing something wrong.

        DataTable dtCSV = ReadCsv(file, columns);   // columns is a List<string>
        DataTable dt = RemoveDuplicateRecords(dtCSV, columns);

        private DataTable RemoveDuplicateRecords(DataTable dtCSV, List<string> columns)
        {
            DataView dv = dtCSV.DefaultView;
            string RowFilter = string.Empty;
            if (dt == null)
                dt = dv.ToTable().Clone();
            foreach (DataRow row in dtCSV.Rows)
            {
                try
                {
                    RowFilter = string.Empty;
                    foreach (string column in columns)
                    {
                        string col = column;
                        RowFilter += "[" + col + "]" + "='" + row[col].ToString().Replace("'", "''") + "' and ";
                    }
                    RowFilter = RowFilter.Substring(0, RowFilter.Length - 4);
                    dv.RowFilter = RowFilter;
                    DataRow dr = dt.NewRow();
                    bool result = RowExists(dt, RowFilter);
                    if (!result)
                    {
                        dr.ItemArray = dv.ToTable().Rows[0].ItemArray;
                        dt.Rows.Add(dr);
                    }
                }
                catch (Exception ex) { }
            }
            return dt;
        }
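
    A sketch of a faster approach, assuming the same inputs (requires using System, System.Collections.Generic, System.Data and System.Linq): build one composite key per row and track the keys in a HashSet, so each row costs an O(1) lookup instead of a DataView RowFilter scan:

        private static DataTable RemoveDuplicateRecords(DataTable dtCSV, List<string> columns)
        {
            DataTable result = dtCSV.Clone();
            HashSet<string> seen = new HashSet<string>();

            foreach (DataRow row in dtCSV.Rows)
            {
                // "\0" rarely appears in CSV fields, so it is a safe separator.
                string key = string.Join("\0",
                    columns.Select(c => Convert.ToString(row[c])).ToArray());

                if (seen.Add(key))          // Add returns false for a repeat key
                    result.ImportRow(row);
            }
            return result;
        }

    This turns the overall cost from roughly quadratic to linear in the number of rows, which matters at the 20-30 MB scale described.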

    Read the article

  • MySQL Normalization stored procedure performance

    - by srkiNZ84
Hi, I've written a stored procedure in MySQL to take values currently in a table and to "normalize" them. This means that for each value passed to the stored procedure, it checks whether the value is already in the table. If it is, it stores the id of that row in a variable. If the value is not in the table, it stores the newly inserted value's id. The stored procedure then takes the ids and inserts them into a table which is equivalent to the original de-normalized table, but this table is fully normalized and consists mainly of foreign keys. My problem with this design is that the stored procedure takes approximately 10 ms or so to return, which is too long when you're trying to work through some 10 million records. My suspicion is that the performance is to do with the way in which I'm doing the inserts, i.e.:

        INSERT INTO TableA (first_value) VALUES (argument_from_sp)
          ON DUPLICATE KEY UPDATE id = LAST_INSERT_ID(id);
        SET @TableAId = LAST_INSERT_ID();

    The "ON DUPLICATE KEY UPDATE" is a bit of a hack, due to the fact that on a duplicate key I don't want to update anything but rather just return the id value of the row. If you skip this step, though, the LAST_INSERT_ID() function returns the wrong value when you run the "SET ..." statement. Does anyone know of a better way to do this in MySQL? Thank you
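
    A sketch of one alternative, assuming TableA has an AUTO_INCREMENT id and a unique index on first_value: read first and write only when the value is genuinely new, so the common case (value already normalized) is a pure index lookup with no write or ON DUPLICATE KEY machinery:

        SET @TableAId = (SELECT id FROM TableA WHERE first_value = argument_from_sp);
        IF @TableAId IS NULL THEN
            INSERT INTO TableA (first_value) VALUES (argument_from_sp);
            SET @TableAId = LAST_INSERT_ID();
        END IF;

    Under concurrent writers there is still a race between the SELECT and the INSERT, so keep the unique index and be prepared to retry on a duplicate-key error. For 10 million records, batching many values per call would likely help more than either single-row variant.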

    Read the article

  • Remove redundant SQL code

    - by Dave Jarvis
Code

    The following code calculates the slope and intercept for a linear regression against a slathering of data. It then applies the equation y = mx + b against the same result set to calculate the value of the regression line for each row. Can the two separate sub-selects be joined so that the data and its slope/intercept are calculated without executing the data-gathering part of the query twice?

        SELECT
          AVG(D.AMOUNT) as AMOUNT,
          Y.YEAR * ymxb.SLOPE + ymxb.INTERCEPT as REGRESSION_LINE,
          Y.YEAR as YEAR,
          MAKEDATE(Y.YEAR,1) as AMOUNT_DATE
        FROM
          CITY C, STATION S, YEAR_REF Y, MONTH_REF M, DAILY D,
          (SELECT
             ((avg(t.AMOUNT * t.YEAR)) - avg(t.AMOUNT) * avg(t.YEAR)) /
               (stddev( t.AMOUNT ) * stddev( t.YEAR )) as CORRELATION,
             ((sum(t.YEAR) * sum(t.AMOUNT)) - (count(1) * sum(t.YEAR * t.AMOUNT))) /
               (power(sum(t.YEAR), 2) - count(1) * sum(power(t.YEAR, 2))) as SLOPE,
             ((sum( t.YEAR ) * sum( t.YEAR * t.AMOUNT )) - (sum( t.AMOUNT ) * sum(power(t.YEAR, 2)))) /
               (power(sum(t.YEAR), 2) - count(1) * sum(power(t.YEAR, 2))) as INTERCEPT
           FROM (
             SELECT
               AVG(D.AMOUNT) as AMOUNT,
               Y.YEAR as YEAR,
               MAKEDATE(Y.YEAR,1) as AMOUNT_DATE
             FROM
               CITY C, STATION S, YEAR_REF Y, MONTH_REF M, DAILY D
             WHERE
               $X{ IN, C.ID, CityCode } AND
               SQRT( POW( C.LATITUDE - S.LATITUDE, 2 ) + POW( C.LONGITUDE - S.LONGITUDE, 2 ) ) < $P{Radius} AND
               S.STATION_DISTRICT_ID = Y.STATION_DISTRICT_ID AND
               Y.YEAR BETWEEN 1900 AND 2009 AND
               M.YEAR_REF_ID = Y.ID AND
               M.CATEGORY_ID = $P{CategoryCode} AND
               M.ID = D.MONTH_REF_ID AND
               D.DAILY_FLAG_ID <> 'M'
             GROUP BY
               Y.YEAR
           ) t
          ) ymxb
        WHERE
          $X{ IN, C.ID, CityCode } AND
          SQRT( POW( C.LATITUDE - S.LATITUDE, 2 ) + POW( C.LONGITUDE - S.LONGITUDE, 2 ) ) < $P{Radius} AND
          S.STATION_DISTRICT_ID = Y.STATION_DISTRICT_ID AND
          Y.YEAR BETWEEN 1900 AND 2009 AND
          M.YEAR_REF_ID = Y.ID AND
          M.CATEGORY_ID = $P{CategoryCode} AND
          M.ID = D.MONTH_REF_ID AND
          D.DAILY_FLAG_ID <> 'M'
        GROUP BY
          Y.YEAR

    Question

    How do I execute the duplicate bits only once per query, instead of twice? The duplicate bit is the WHERE clause:

        $X{ IN, C.ID, CityCode } AND
        SQRT( POW( C.LATITUDE - S.LATITUDE, 2 ) + POW( C.LONGITUDE - S.LONGITUDE, 2 ) ) < $P{Radius} AND
        S.STATION_DISTRICT_ID = Y.STATION_DISTRICT_ID AND
        Y.YEAR BETWEEN 1900 AND 2009 AND
        M.YEAR_REF_ID = Y.ID AND
        M.CATEGORY_ID = $P{CategoryCode} AND
        M.ID = D.MONTH_REF_ID AND
        D.DAILY_FLAG_ID <> 'M'

    Related

    http://stackoverflow.com/questions/1595659/how-to-eliminate-duplicate-calculation-in-sql

    Thank you!
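
    A sketch of one way to do this in MySQL of this era (no CTEs), assuming a scratch table is acceptable: materialize the shared filter/GROUP BY once, compute the slope and intercept into user variables, then produce the per-year rows from the same scratch table. tmp_amounts is a hypothetical name, and the elided WHERE stands for the shared $X{...}/$P{...} predicates written once; a TEMPORARY table works here because no single statement references it twice:

        CREATE TEMPORARY TABLE tmp_amounts AS
          SELECT AVG(D.AMOUNT) AS AMOUNT, Y.YEAR AS YEAR, MAKEDATE(Y.YEAR, 1) AS AMOUNT_DATE
          FROM CITY C, STATION S, YEAR_REF Y, MONTH_REF M, DAILY D
          WHERE ...                       -- the shared predicates, written once
          GROUP BY Y.YEAR;

        SELECT ((SUM(YEAR) * SUM(AMOUNT)) - (COUNT(1) * SUM(YEAR * AMOUNT))) /
                 (POW(SUM(YEAR), 2) - COUNT(1) * SUM(POW(YEAR, 2))),
               ((SUM(YEAR) * SUM(YEAR * AMOUNT)) - (SUM(AMOUNT) * SUM(POW(YEAR, 2)))) /
                 (POW(SUM(YEAR), 2) - COUNT(1) * SUM(POW(YEAR, 2)))
        INTO @slope, @intercept
        FROM tmp_amounts;

        SELECT AMOUNT,
               YEAR * @slope + @intercept AS REGRESSION_LINE,
               YEAR,
               AMOUNT_DATE
        FROM tmp_amounts;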

    Read the article

  • Duplicate Symbol Linker Error (C++ help)

    - by Vash265
Hi. I'm learning some CSP (constraint satisfaction) theory right now, and am using this library to parse XML files. I'm using Xcode as an IDE. My program compiles fine, but when it links the files, I get a duplicate symbol error involving the XMLParser_libxml2.hh file. My files are separated as such:

      • A class header file that includes the XMLParser file above
      • A class implementation file that includes the class header file
      • A main file that includes the class header file

    The duplicate symbol occurs in main.o and the class file's .o, but as far as I can tell, I'm not actually adding that .hh file twice. Full error:

        ld: duplicate symbol bool CSPXMLParser::UTF8String::to<std::basic_string<char, std::char_traits<char>, std::allocator<char> > >(std::basic_string<char, std::char_traits<char>, std::allocator<char> >&) const in
        /Users/vash265/CSP/Untitled/build/Untitled.build/Debug/Untitled.build/Objects-normal/x86_64/dStructFill.o and
        /Users/vash265/CSP/Untitled/build/Untitled.build/Debug/Untitled.build/Objects-normal/x86_64/main.o

    Copying the implementation of the class into the main file and taking the class implementation file out of the compilation target alleviates the error, but it's a disorganized mess this way, and I'll be adding more classes very soon (and it would be nice to have them in separate files). As I've come to understand it, this is caused by the file (XMLParser_libxml2.hh) having both the class and function definition and implementation in one file (and it seems as though this might have been necessary due to the use of templates in that 'header' file). Any ideas on how to get around sticking all my class files in my main.cpp? (I've tried ifdefs; they don't work.)
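
    One detail explains the symptom (a general C++ rule, not specific knowledge of this library): template definitions may live in headers, but a full explicit specialization like to<std::string> is an ordinary function, so if the header defines it without the inline keyword, every translation unit including the header emits its own copy and the linker sees duplicates. Include guards and ifdefs don't help, because they only prevent double inclusion within a single translation unit. A minimal sketch of the rule, with hypothetical names mirroring the error:

        // utf8string.hh -- hypothetical header illustrating the fix
        #include <string>

        struct UTF8String {
            const char* data;

            template <typename T>
            bool to(T& out) const;          // primary template: fine in a header

        };

        // Full specialization: must be 'inline' when defined in a header, or it
        // violates the one-definition rule across translation units.
        template <>
        inline bool UTF8String::to<std::string>(std::string& out) const {
            out = data ? data : "";
            return data != 0;
        }

    So adding inline to the offending specialization in XMLParser_libxml2.hh (or moving its body into a single .cpp) should let the separate class files link cleanly.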

    Read the article

  • Entity Framework duplicate record on second insert

    - by Delysid
I am building an application for recipe/meal planning, and I have come across a problem I can't seem to figure out. I have a table for units of measure, where I keep the units in use; I only want unique units in here (for grocery-list calculation and so forth). But if I use a unit from the table on a recipe, the first time it is okay and nothing is inserted in units of measure, but the second time I get a "duplicate". I suspect it has something to do with the EntityKey, because the primary key is an identity column on the SQL Server (2008 R2). For some reason it works to change the object state on some objects (courses, see code) and that does not generate a duplicate, but the same approach does not work on the unit of measure. My insert method looks like this:

        public recipe Create(recipe recipe)
        {
            using (RecipeDataContext ctx = new RecipeDataContext())
            {
                foreach (recipe_ingredient rec_ing in recipe.recipe_ingredient)
                {
                    if (rec_ing.ingredient.ingredient_id == 0)
                    {
                        ingredient ing = (from _ing in ctx.ingredients
                                          where _ing.name == rec_ing.ingredient.name
                                          select _ing).FirstOrDefault();
                        if (ing != null)
                        {
                            rec_ing.ingredient_id = ing.ingredient_id;
                            rec_ing.ingredient = null;
                        }
                    }
                    if (rec_ing.unit_of_measure.unit_of_measure_id == 0)
                    {
                        unit_of_measure _uom = (from dbUom in ctx.unit_of_measure
                                                where dbUom.unit == rec_ing.unit_of_measure.unit
                                                select dbUom).FirstOrDefault();
                        if (_uom != null)
                        {
                            rec_ing.unit_of_measure_id = _uom.unit_of_measure_id;
                            rec_ing.unit_of_measure = null;
                        }
                    }
                    ctx.Recipes.AddObject(recipe);
                    // for some reason it works to change the object state of this,
                    // and it does not generate a duplicate
                    ctx.ObjectStateManager.ChangeObjectState(recipe.courses[0], EntityState.Unchanged);
                }
                ctx.SaveChanges();
            }
            return recipe;
        }

    My data model looks like this: http://i.imgur.com/NMwZv.png
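
    A sketch of the usual EF4 pattern for reusing a lookup row, offered as an assumption about the cause rather than a confirmed diagnosis: relate the ingredient to the entity you already queried (which the context is tracking) instead of nulling the navigation property while a detached copy stays on the graph, and add the recipe graph once, outside the ingredient loop:

        unit_of_measure existing = (from dbUom in ctx.unit_of_measure
                                    where dbUom.unit == rec_ing.unit_of_measure.unit
                                    select dbUom).FirstOrDefault();
        if (existing != null)
        {
            // 'existing' is already tracked by ctx, so EF reuses its key
            // instead of scheduling an INSERT for a detached copy.
            rec_ing.unit_of_measure = existing;
        }

        // ... after the foreach completes:
        ctx.Recipes.AddObject(recipe);   // add the object graph exactly once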

    Read the article

  • BIND - why duplicate nameserver entries (@ and *)?

    - by user27465
I had to manually tweak my DNS service provider's BIND file. BIND file, created by a professional hosting company, before:

        $ORIGIN mycoolsite.com.
        $TTL 300
        @     SOA ns1.cheapreg.com. registry.cheapreg.com. ( ... )
        @     IN 3600 NS ns1.cheapreg.com.
        @     IN 3600 NS ns2.cheapreg.com.
        @     IN 3600 A  199.9.99.85
        @     IN 3600 A  199.9.99.86
        *     IN 3600 A  199.9.99.85
        *     IN 3600 A  199.9.99.86
        www   IN 3600 A  199.9.99.85
        www   IN 3600 A  199.9.99.86

    BIND file, created by a layman, after:

        $ORIGIN mycoolsite.com.
        $TTL 300
        @     SOA ns1.cheapreg.com. registry.cheapreg.com. ( ... )
        @     IN 3600 NS ns1.cheapreg.com.
        @     IN 3600 NS ns2.cheapreg.com.
        *     IN 3600 A  219.94.116.50
        *     IN 3600 A  219.94.116.51
        *     IN 3600 A  219.94.116.52

    The difference is that the "pro" file has duplicated the nameserver entries, once for @ and once for *, and I haven't. Is there any reason I should also duplicate the entries (@ and *)?
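
    A point that matters here (general DNS behavior, not anything specific to cheapreg.com): the duplicated entries are address (A) records rather than NS records, and a * wildcard never matches the zone apex itself, only names that don't otherwise exist beneath it. Without the @ A records, bare mycoolsite.com. has no address and won't resolve. A sketch of the "after" file with the apex covered:

        @   IN 3600 A 219.94.116.50
        @   IN 3600 A 219.94.116.51
        @   IN 3600 A 219.94.116.52
        *   IN 3600 A 219.94.116.50
        *   IN 3600 A 219.94.116.51
        *   IN 3600 A 219.94.116.52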

    Read the article

  • Tracking down source of duplicate email messages in Outlook / Exchange environment

    - by Ken Pespisa
    I have a few users, who are also Blackberry users, that occasionally have duplicate emails generated from their "mailbox". I put mailbox in quotes because I'm not exactly sure where the duplicates are created. One of these users is in non-cached mode, and the other is in cached mode, and both experience the problem. In fact, the non-cached mode user was originally experiencing the problem while in cached mode, and I made the switch a few weeks ago to attempt to solve the problem. Today I discovered the issue still exists. I'm not sure if the fact that they are blackberry users could be causing the problem at all. I don't see how, but felt I should mention it anyway. Does anyone have ideas on how I might begin to troubleshoot this? I can see in the non-cached user's mailbox "Sent Items" that the message was sent only once. I confirmed the message does not state that there was a conflict and in fact that makes sense because they are in non-cached mode. On the server, we have a mail journaling feature turned on for our third-party mail archiving system, and I can see that that system sees two sent messages. And likewise, the recipient does in fact have two messages in their inbox with consecutive message IDs ([email protected]) and ([email protected]). It would seem to me that the duplicates are generated on the client, but is there a way to tell for sure?

    Read the article

  • Syncing two sheets, while being able to hide different data

    - by Joshua
I'm pretty new to Excel, so please bear with me. I have created a spreadsheet to organize gear by serial number and by who has it. This list is updated multiple times daily as gear shuffles regularly. I have gear that is assigned and unassigned. On the main sheet I have all the data organized the way I want it. What I'm trying to do is duplicate this sheet so that both sheets automatically keep the same data at all times, but on the first sheet I can hide all the unassigned gear, view only the assigned gear, and then narrow it down in groups using the hide function heavily. On the second sheet I want to hide all of the assigned gear, plus any columns of gear with nothing unassigned. The end result will be that as gear is moved between individuals or is unassigned entirely, I make the adjustment on one sheet and the data stays the same on both, but the way I view that same data is different on each. If I'm making no sense, just let me know and I'll try to explain more clearly. Thanks
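
    A minimal sketch of the standard approach, assuming the master data lives on a sheet named Main: fill the mirror sheet with linking formulas so it always shows Main's current values, then hide rows and columns on each sheet independently (hiding never changes the underlying data). In cell A1 of the mirror sheet, either of:

        ='Main'!A1
        =IF(Main!A1="", "", Main!A1)

    then fill right and down across the whole data range. The second variant keeps blank cells in Main from showing up as zeros. Edits must always be made on Main; the mirror is read-only by construction.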

    Read the article

  • SMTP Client implementation [on hold]

    - by orif
I'm implementing an SMTP client. What should the client do once it has already sent the "." at the end of the mail, but hasn't received "250 Ok"? This is how the conversation between the client and server looks:

        Server Response: 220 www.sample.com ESMTP Postfix
        Client Sending : HELO domain.com
        Server Response: 250 Hello domain.com
        Client Sending : MAIL FROM: <[email protected]>
        Server Response: 250 Ok
        Client Sending : RCPT TO: <[email protected]>
        Server Response: 250 Ok
        Client Sending : DATA
        Server Response: 354 End data with <CR><LF>.<CR><LF>
        Client Sending : Subject: Example Message
        Client Sending : From: [email protected]
        Client Sending : To: [email protected]
        Client Sending :
        Client Sending : TEST MAIL
        Client Sending :
        Client Sending : .
        Server Response: 250 Ok: queued as 23411
        Client Sending : QUIT

    I'm not sure what the client should do if it sends "." and doesn't receive the 250 Ok -- because of a possible network error. Was the "." received or not? Should the client resend the mail -- and maybe duplicate the item -- or not, and risk losing an important mail item? Thank you.
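
    For what it's worth, SMTP itself treats this window as unavoidable: the server only takes responsibility for the message when it sends 250 after the final dot, so a reply lost to a network error leaves the client unable to distinguish "never queued" from "queued but unacknowledged" (RFC 5321 discusses this duplicate-delivery risk explicitly). Common practice is to resend, accepting a possible duplicate as the lesser evil versus silent loss, and to keep the Message-ID header identical across retries so the receiving side can detect the repeat. A sketch (the Message-ID value is hypothetical):

        Client Sending : Message-ID: <[email protected]>
        Client Sending : ...rest of headers and body...
        Client Sending : .
        (connection drops before any reply is read)
        -- reconnect and resend the entire message with the SAME Message-ID --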

    Read the article

  • A duplicate name has been detected on the TCP network

    - by MSedm
When I installed my domain controller and DNS, I had two NICs on the server. Each NIC has its own IP address; the NICs are not teamed, they are separate, and the addresses are in the same subnet. Both IP addresses are now registered in DNS; I found them in the forward and reverse lookup zones. Everything works OK except for the following error in the event log: "A duplicate name has been detected on the TCP network......" I have now realized that this is because of the second NIC. My question is: if I disable the second NIC, what happens to the DNS records associated with the second IP address? How do I remove all the DNS records for the disabled NIC? There are A records, some records with the name "(same as parent folder)", PTR records, and maybe more. How do I disable the second NIC and remove all the associated DNS records? Please help.
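
    A sketch of the usual cleanup, assuming Windows Server DNS with dynamic registration (the zone, host, and addresses below are placeholders): untick "Register this connection's addresses in DNS" in the second NIC's advanced TCP/IP settings (or disable the NIC), delete its stale records, and re-register the remaining address:

        :: delete the stale A record for the second NIC's address
        dnscmd . /RecordDelete mydomain.local myserver A 192.168.1.2 /f
        :: delete the matching PTR record from the reverse zone
        dnscmd . /RecordDelete 1.168.192.in-addr.arpa 2 PTR myserver.mydomain.local /f
        :: re-register the surviving NIC's A and PTR records
        ipconfig /registerdns

    The same deletions can be done by hand in the DNS console; records the server re-creates afterward belong to the remaining NIC and are fine to keep.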

    Read the article

  • Can no longer duplicate display to external monitor on Windows 7

    - by rbeier
    We have a large TV at work - I connect my laptop to it to share my screen during meetings. Until today, my laptop display has been duplicating to the TV automatically when I connect the TV cable to the laptop. The display resolution would decrease automatically to be compatible with the TV. Today, however, it's stopped working. When I connect the cable to the TV, the display extends rather than duplicating. Using the Win+P key combination (or Fn+F7 on my Lenovo laptop), I can choose to duplicate the display - but when I do this, it ends up only displaying on the laptop. I can get it to display on the TV by hitting Win+P and choosing "projector only", but then I can't see what I'm doing on the laptop screen. I have a Lenovo W520 laptop running Windows 7, connected to the TV using a DisplayPort-to-HDMI converter cable. The TV's native resolution is 1280x720; the laptop's native resolution is 1600x900. I've tried booting with the TV cable already connected; I've tried manually lowering the display resolution on the laptop to 1280x720 before duplicating the display. Neither works. Does anyone have any other suggestions?

    Read the article

  • Windows 7 constantly accessing hard drive [duplicate]

    - by Zohar
Possible Duplicate: Tool which finds which process is causing the heavy hard drive activity? Did you notice that on Windows 7 (I use 64-bit) the hard drive LED is constantly blinking, which means that the OS is constantly wearing the hard drive by accessing it? It's something related to the system process, and it even occurs in safe mode, so I don't think it's a third-party software problem. Has anyone experienced this problem as well, and is it a Windows problem, or caused by something else? Edit: My indexing service is reduced to indexing only the Start Menu. Even if it was set for the whole computer, it would eventually stop; that's not it. My friends also suffer from the same problem. Please answer my first question: has any of you seen a Windows 7 machine whose hard drive LED is at rest? I'm also trying to track down the offending process using procmon and Resource Monitor, and it actually seems like a system process. It could also be svchost.exe, and I'm not sure which file they are accessing since I see a lot of activity which I can't make sense of. It's loading system DLLs, accessing registry keys, and other nonsense.

    Read the article

  • How to reduce iOS AVPlayer start delay

    - by Bernt Habermeier
Note, for the question below: all assets are local on the device -- no network streaming is taking place -- and the videos contain audio tracks.

    I'm working on an iOS application that requires playing video files with minimum delay to start the video clip in question. Unfortunately, we do not know which specific video clip is next until we actually need to start it: when one clip is playing, we know the next set of (roughly) 10 possible clips, but not which one, until it comes time to 'immediately' play the next clip.

    What I've done to look at actual start delays is to call addBoundaryTimeObserverForTimes on the video player, with a time period of one millisecond, to see when the video actually started to play, and I take the difference of that timestamp with the first place in the code that indicates which asset to start playing. From what I've seen thus far, the combination of AVAsset loading, then creating an AVPlayerItem from that once it's ready, and then waiting for AVPlayerStatusReadyToPlay before calling play, tends to take between 1 and 3 seconds to start the clip. I've since switched to what I think is roughly equivalent: calling [AVPlayerItem playerItemWithURL:] and waiting for AVPlayerItemStatusReadyToPlay before playing. Roughly the same performance.

    One thing I'm observing is that the first AVPlayer item load is slower than the rest. It seems one idea is to pre-flight the AVPlayer with a short/empty asset before trying to play the first video. [http://stackoverflow.com/questions/900461/slow-start-for-avaudioplayer-the-first-time-a-sound-is-played] I'd love to get the video start times down as much as possible, and have some ideas of things to experiment with, but would like some guidance from anyone who might be able to help.

    Update: idea 7, below, as implemented yields switching times of around 500 ms. This is an improvement, but it would be nice to get it even faster.

    Idea 1: Use N AVPlayers (won't work)
    Using ~10 AVPlayer objects, start-and-pause all ~10 clips, and once we know which one we really need, switch to and un-pause the correct AVPlayer, then start all over again for the next cycle. I don't think this works, because I've read there is roughly a limit of 4 active AVPlayers in iOS. There was someone asking about this on StackOverflow who found out about the 4-AVPlayer limit: fast-switching-between-videos-using-avfoundation

    Idea 2: Use AVQueuePlayer (won't work)
    I don't believe that shoving 10 AVPlayerItems into an AVQueuePlayer would pre-load them all for seamless start. AVQueuePlayer is a queue, and I think it really only makes the next video in the queue ready for immediate playback. I don't know which one out of ~10 videos we will want to play until it's time to start that one. ios-avplayer-video-preloading

    Idea 3: Load, play, and retain AVPlayerItems in the background (not 100% sure yet -- but not looking good)
    I'm looking at whether there is any benefit to loading and playing the first second of each video clip in the background (suppressing video and audio output), keeping a reference to each AVPlayerItem, and, when we know which item needs to be played for real, swapping that one in along with the background AVPlayer. Rinse and repeat. The theory is that a recently played AVPlayer/AVPlayerItem may still hold prepared resources which would make subsequent playback faster. So far I have not seen benefits from this, but I might not have the AVPlayerLayer set up correctly for the background. I doubt this will really improve things from what I've seen.

    Idea 4: Use a different file format -- maybe one that is faster to load?
    I'm currently using the .m4v (video-MPEG4) H.264 format. I have not played around with other formats, but it may well be that some formats are faster to decode or get ready than others. Possibly still video-MPEG4 but with a different codec, or maybe QuickTime? Maybe a lossless video format where decoding/setup is faster?

    Idea 5: Combination of lossless video format + AVQueuePlayer
    If there is a video format that is fast to load but where the file size might be insane, one idea would be to pre-prepare the first 10 seconds of each video clip in a version that is bloated but faster to load, backed by an asset encoded in H.264. Use an AVQueuePlayer: add the first 10 seconds in the uncompressed file format, and follow that with the H.264 version, which then gets up to 10 seconds of prepare/preload time. So I'd get the best of both worlds: fast start times, but also the benefits of a more compact format.

    Idea 6: Use a non-standard AVPlayer / write my own / use someone else's
    Given my needs, maybe I can't use AVPlayer but have to resort to AVAssetReader: decode the first few seconds (possibly writing a raw file to disk), and when it comes to playback, use the raw format to play it back fast. That seems like a huge project to me, and if I go about it in a naive way, it's unclear whether it would even work better. Each decoded and uncompressed video frame is 2.25 MB; naively speaking, at ~30 fps I'd end up with a ~60 MB/s read-from-disk requirement, which is probably impossible / pushing it. Obviously we'd have to do some level of image compression (perhaps native OpenGL ES compression formats via PVRTC)... but that's kind of crazy. Maybe there is a library out there that I can use?

    Idea 7: Combine everything into a single movie asset, and seekToTime
    One idea that might be easier than some of the above is to combine everything into a single movie and use seekToTime. The thing is that we'd be jumping all around the place -- essentially random access into the movie. I think this may actually work out okay: avplayer-movie-playing-lag-in-ios5

    Which approach do you think would be best? So far, I've not made that much progress in terms of reducing the lag.
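
    For idea 7, the relevant API is the tolerance-taking variant of seekToTime, which prevents AVFoundation from snapping to the nearest keyframe instead of the clip's true start. A sketch, where combinedMovieURL and clipStart are hypothetical stand-ins for the single concatenated asset and the chosen clip's offset:

        AVPlayerItem *item = [AVPlayerItem playerItemWithURL:combinedMovieURL];
        AVPlayer *player = [AVPlayer playerWithPlayerItem:item];

        // Exact-position seek: zero tolerance on both sides means the player
        // lands on clipStart itself rather than a nearby keyframe boundary.
        [player seekToTime:clipStart
           toleranceBefore:kCMTimeZero
            toleranceAfter:kCMTimeZero
         completionHandler:^(BOOL finished) {
             if (finished) {
                 [player play];
             }
         }];

    Encoding the combined movie with frequent keyframes should keep those exact seeks cheap, at some cost in file size.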

    Read the article

  • How to reduce a 2D array

    - by owca
I have a 2D array, let's say like this:

        2  0  8  9
        3  0 -1 20
        13 12 17 18
        1  2  3  4
        2  0  7  9

    How do I create a copy of the array reduced by, let's say, the 2nd row and the 3rd column?

        2  0  9
        13 12 18
        1  2  4
        2  0  9
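
    A sketch in Java (the question doesn't name a language, so this is just one natural choice): copy every element whose row and column survive. Indices are 0-based, so the 2nd row and 3rd column are skipRow = 1 and skipCol = 2:

        static int[][] reduce(int[][] a, int skipRow, int skipCol) {
            int[][] out = new int[a.length - 1][a[0].length - 1];
            for (int i = 0, r = 0; i < a.length; i++) {
                if (i == skipRow) continue;       // drop the unwanted row
                for (int j = 0, c = 0; j < a[i].length; j++) {
                    if (j == skipCol) continue;   // drop the unwanted column
                    out[r][c++] = a[i][j];
                }
                r++;
            }
            return out;
        }

    Calling reduce(a, 1, 2) on the array above yields exactly the 4x3 result shown.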

    Read the article
