Search Results

Search found 3880 results on 156 pages for 'duplicate'.


  • Stale statistics on a newly created temporary table in a stored procedure can lead to poor performance

    - by sqlworkshops
    When you create a temporary table you expect a new table with no past history (statistics based on past existence), but this is not true if you have fewer than 6 updates to the temporary table. This can lead to poor performance of queries that are sensitive to the contents of temporary tables.

    I was optimizing SQL Server performance at one of my customers who provides search functionality on their website. They use a stored procedure with a temporary table for the search. The performance of the search depended on who searched what in the past; option (recompile) by itself had no effect. Sometimes a simple search led to a timeout because of non-optimal plan usage due to this behavior. This is not a plan caching issue but rather a temporary table statistics caching issue, which is part of the temporary object caching feature introduced in SQL Server 2005 and also present in SQL Server 2008 and SQL Server 2012. In this customer's case we implemented a workaround to avoid the issue (see below for example workarounds).

    When temporary tables are cached, the statistics are not created fresh; they are cached from past executions and updated based on the automatic update statistics threshold. Caching temporary tables/objects is good for performance, but caching stale statistics from the past is not optimal.

    We can work around this issue by disabling temporary table caching with an explicit DDL statement on the temporary table. One possibility is to execute an alter table statement, but this can lead to a duplicate constraint name error on concurrent stored procedure execution. The other way to work around this is to create an index.

    I think there might be many customers in this situation without knowing that stale statistics are being cached along with the temporary table, leading to poor performance. The ideal solution is a more aggressive statistics update when the temporary table has a small number of rows and temporary table caching is used. I will open a Connect item to report this issue. Meanwhile you can mitigate the issue by creating an index on the temporary table. You can monitor active temporary tables using the Windows Server Performance Monitor counter: SQL Server: General Statistics -> Active Temp Tables.
    The script to understand the issue and the workaround is listed below:

        set nocount on
        set statistics time off
        set statistics io off
        drop table tab7
        go
        create table tab7 (c1 int primary key clustered, c2 int, c3 char(200))
        go
        create index test on tab7(c2, c1, c3)
        go
        begin tran
        declare @i int
        set @i = 1
        while @i <= 50000
        begin
            insert into tab7 values (@i, 1, 'a')
            set @i = @i + 1
        end
        commit tran
        go
        insert into tab7 values (50001, 1, 'a')
        go
        checkpoint
        go
        drop proc test_slow
        go
        create proc test_slow @i int
        as
        begin
            declare @j int
            create table #temp1 (c1 int primary key)
            insert into #temp1 (c1) select @i
            select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
        end
        go
        dbcc dropcleanbuffers
        set statistics time on
        set statistics io on
        go
        --high reads as expected for parameter '1'
        exec test_slow 1
        go
        dbcc dropcleanbuffers
        go
        --high reads that are not expected for parameter '2'
        exec test_slow 2
        go
        drop proc test_with_recompile
        go
        create proc test_with_recompile @i int
        as
        begin
            declare @j int
            create table #temp1 (c1 int primary key)
            insert into #temp1 (c1) select @i
            select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
            option (recompile)
        end
        go
        dbcc dropcleanbuffers
        set statistics time on
        set statistics io on
        go
        --high reads as expected for parameter '1'
        exec test_with_recompile 1
        go
        dbcc dropcleanbuffers
        go
        --high reads that are not expected for parameter '2'
        --low reads on 3rd execution as expected for parameter '2'
        exec test_with_recompile 2
        go
        drop proc test_with_alter_table_recompile
        go
        create proc test_with_alter_table_recompile @i int
        as
        begin
            declare @j int
            create table #temp1 (c1 int primary key)
            --to avoid caching of temporary tables one can create a constraint
            --but this might lead to duplicate constraint name error on concurrent usage
            alter table #temp1 add constraint test123 unique(c1)
            insert into #temp1 (c1) select @i
            select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
            option (recompile)
        end
        go
        dbcc dropcleanbuffers
        set statistics time on
        set statistics io on
        go
        --high reads as expected for parameter '1'
        exec test_with_alter_table_recompile 1
        go
        dbcc dropcleanbuffers
        go
        --low reads as expected for parameter '2'
        exec test_with_alter_table_recompile 2
        go
        drop proc test_with_index_recompile
        go
        create proc test_with_index_recompile @i int
        as
        begin
            declare @j int
            create table #temp1 (c1 int primary key)
            --to avoid caching of temporary tables one can create an index
            create index test on #temp1(c1)
            insert into #temp1 (c1) select @i
            select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
            option (recompile)
        end
        go
        set statistics time on
        set statistics io on
        dbcc dropcleanbuffers
        go
        --high reads as expected for parameter '1'
        exec test_with_index_recompile 1
        go
        dbcc dropcleanbuffers
        go
        --low reads as expected for parameter '2'
        exec test_with_index_recompile 2
        go

    Read the article

  • Design Pattern for building a Budget

    - by Scott
    So I've looked at the Builder pattern, abstract interfaces, other design patterns, etc., and I think I'm overthinking the simplicity behind what I'm trying to do, so I'm asking you guys for some help with either recommending a design pattern I should use, or an architecture style I'm not familiar with that fits my task. I have one model that represents a Budget in my code. At a high level, it looks like this:

        public class Budget
        {
            public int Id { get; set; }
            public List<MonthlySummary> Months { get; set; }
            public float SavingsPriority { get; set; }
            public float DebtPriority { get; set; }
            public List<Savings> SavingsCollection { get; set; }
            public UserProjectionParameters UserProjectionParameters { get; set; }
            public List<Debt> DebtCollection { get; set; }
            public string Name { get; set; }
            public List<Expense> Expenses { get; set; }
            public List<Income> IncomeCollection { get; set; }
            public bool AutoSave { get; set; }
            public decimal AutoSaveAmount { get; set; }
            public FundType AutoSaveType { get; set; }
            public decimal TotalExcess { get; set; }
            public decimal AccountMinimum { get; set; }
        }

    Going into more detail about some of the properties here shouldn't be necessary, but if you have any questions about them I will fill in more for you. Now, I'm trying to create code that builds one of these things based on a set of BudgetBuildParameters that the user will create and supply. There will be multiple types of these parameters. For example, on the site's homepage there will be an example section where you can quickly see what your numbers look like, so those would be a much simpler set of SampleBudgetBuildParameters than, say, after a user registers and wants to create a fully filled-out Budget using much more information in the DebtBudgetBuildParameters.

    A lot of these builds will use similar code for certain tasks, but one builder might also want to check the status of a user's DebtCollection when formulating a monthly spending report, whereas a Budget that only focuses on savings might not. I'd like to reduce code duplication (obviously) as much as possible, but every way I can think of to do this would require using a base BudgetBuilderFactory to return the correct builder to the caller, and then creating, say, a SimpleBudgetBuilder that inherits from a BudgetBuilder, putting all duplicate code in the BudgetBuilder, and letting the SimpleBudgetBuilder handle its own cases. The problem is that a lot of the unique cases are unique to only 2 of the 4 builders, so there would obviously still be duplicate code somewhere if I did that. Can anyone think of a better way to either explain a solution to this that may or may not be similar to mine, or a completely different pattern or way of thinking here? I really appreciate it.
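
    One possible direction (a sketch I'm adding, not something from the original post): a template-method style base builder that owns the shared pipeline and exposes optional hooks, so only the genuinely unique steps live in each subclass, and steps shared by just 2 of the 4 builders can be pulled into small helper/strategy classes those two builders compose. The type and member names (BudgetBuilder, ApplyDebtRules, etc.) are hypothetical, assuming the Budget class above and a BudgetBuildParameters base type.

        // Hypothetical sketch: shared steps in the base class, no-op hooks per concern.
        public abstract class BudgetBuilder
        {
            public Budget Build(BudgetBuildParameters parameters)
            {
                var budget = new Budget();
                AddIncome(budget, parameters);          // shared
                AddExpenses(budget, parameters);        // shared
                ApplyDebtRules(budget, parameters);     // optional hook
                ApplySavingsRules(budget, parameters);  // optional hook
                return budget;
            }

            protected virtual void AddIncome(Budget budget, BudgetBuildParameters p) { /* shared logic */ }
            protected virtual void AddExpenses(Budget budget, BudgetBuildParameters p) { /* shared logic */ }

            // Hooks are empty by default; only builders that need them override.
            protected virtual void ApplyDebtRules(Budget budget, BudgetBuildParameters p) { }
            protected virtual void ApplySavingsRules(Budget budget, BudgetBuildParameters p) { }
        }

        public class SampleBudgetBuilder : BudgetBuilder
        {
            // Uses the shared steps as-is for the quick homepage example.
        }

        public class DebtBudgetBuilder : BudgetBuilder
        {
            protected override void ApplyDebtRules(Budget budget, BudgetBuildParameters p)
            {
                // Debt-specific logic lives only here; if two builders share it,
                // both can delegate to a small DebtRules helper instead of duplicating it.
            }
        }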

    Read the article

  • Resolving collisions between dynamic game objects

    - by TheBroodian
    I've been building a 2D platformer for some time now, and I'm getting to the point where I am adding dynamic objects to the stage for testing. This has prompted me to consider how I would like my character and other objects to behave when they collide. A typical staple in many 2D platformer-type games is that the player takes damage upon touching an enemy, and then essentially becomes able to pass through enemies during a period of invulnerability, and at the same time enemies are able to pass through each other freely. I personally don't want to take this approach; it feels strange to me that the player should receive arbitrary damage for harmless contact with an enemy, regardless of whether the enemy is attacking or not, and I would like my enemies' interactions with each other (and my player) to be a little more organic, so to speak.

    In my head I sort of have this idea where a game object (player or non-player) would be able to push other game objects around by 'pushing' each other out of one another's bounding boxes if there is an intersection, and maybe correlate the repelling force to how much their bounding boxes are intersecting. The problem I'm experiencing is I have no idea what the math might look like for something like this. I'll show what work I've done so far; it sort of works, but it's jittery and generally not quite what I would want in a functional game:

        //Clears the anti-duplicate buffer
        collisionRecord.Clear();

        //pick a thing
        foreach (GameObject entity in entities)
        {
            //pick another thing
            foreach (GameObject subject in entities)
            {
                //check to make sure both things aren't the same thing
                if (!ReferenceEquals(entity, subject))
                {
                    //check to see if thing2 is in semi-near proximity to thing1
                    if (entity.WideProximityArea.Intersects(subject.CollisionRectangle) || entity.WideProximityArea.Contains(subject.CollisionRectangle))
                    {
                        //check to see if thing2 and thing1 are colliding.
                        if (entity.CollisionRectangle.Intersects(subject.CollisionRectangle) || entity.CollisionRectangle.Contains(subject.CollisionRectangle) || subject.CollisionRectangle.Contains(entity.CollisionRectangle))
                        {
                            //check if we've already resolved their collision or not.
                            if (!collisionRecord.ContainsKey(entity.GetHashCode()))
                            {
                                //more duplicate resolution checking.
                                if (!collisionRecord.ContainsKey(subject.GetHashCode()))
                                {
                                    //if thing1 is traveling right...
                                    if (entity.Velocity.X > 0)
                                    {
                                        //if it isn't too far to the right...
                                        if (subject.CollisionRectangle.Contains(new Microsoft.Xna.Framework.Rectangle(entity.CollisionRectangle.Right, entity.CollisionRectangle.Y, 1, entity.CollisionRectangle.Height)) || subject.CollisionRectangle.Intersects(new Microsoft.Xna.Framework.Rectangle(entity.CollisionRectangle.Right, entity.CollisionRectangle.Y, 1, entity.CollisionRectangle.Height)))
                                        {
                                            //Find how deep thing1 is intersecting thing2's collision box.
                                            float offset = entity.CollisionRectangle.Right - subject.CollisionRectangle.Left;
                                            //Move both things in opposite directions half the length of the intersection, pushing thing1 to the left, and thing2 to the right.
                                            entity.Velocities.Add(new Vector2(-(((offset * 4) * (float)gameTime.ElapsedGameTime.TotalMilliseconds)), 0));
                                            subject.Velocities.Add(new Vector2((((offset * 4) * (float)gameTime.ElapsedGameTime.TotalMilliseconds)), 0));
                                        }
                                    }
                                    //if thing1 is traveling left...
                                    if (entity.Velocity.X < 0)
                                    {
                                        //if thing1 isn't too far left...
                                        if (entity.CollisionRectangle.Contains(new Microsoft.Xna.Framework.Rectangle(subject.CollisionRectangle.Right, subject.CollisionRectangle.Y, 1, subject.CollisionRectangle.Height)) || entity.CollisionRectangle.Intersects(new Microsoft.Xna.Framework.Rectangle(subject.CollisionRectangle.Right, subject.CollisionRectangle.Y, 1, subject.CollisionRectangle.Height)))
                                        {
                                            //Find how deep thing1 is intersecting thing2's collision box.
                                            float offset = subject.CollisionRectangle.Right - entity.CollisionRectangle.Left;
                                            //Move both things in opposite directions half the length of the intersection, pushing thing1 to the right, and thing2 to the left.
                                            entity.Velocities.Add(new Vector2((((offset * 4) * (float)gameTime.ElapsedGameTime.TotalMilliseconds)), 0));
                                            subject.Velocities.Add(new Vector2(-(((offset * 4) * (float)gameTime.ElapsedGameTime.TotalMilliseconds)), 0));
                                        }
                                    }
                                    //Make record that thing1 and thing2 have interacted and the collision has been solved, so that if thing2 is picked next in the foreach loop, it isn't checked against thing1 a second time before the next update.
                                    collisionRecord.Add(entity.GetHashCode(), subject.GetHashCode());
                                }
                            }
                        }
                    }
                }
            }
        }

    One of the biggest issues with my code, aside from the jitteriness, is that if one character lands on top of another character, it very suddenly and abruptly resolves the collision, whereas I would like a more subtle and gradual resolution. Any thoughts or ideas are incredibly welcome and helpful.
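
    One possible direction (a sketch I'm adding, not from the original post): compute the actual intersection rectangle each frame and push the two objects apart along the axis of least penetration, splitting the correction between them; scaling the push by overlap depth gives the gradual, proportional response described above, and handling the vertical axis makes the "landing on top" case resolve softly too. The helper name ResolveOverlap and the pushScale constant are made up; XNA's Rectangle/Vector2 types are assumed.

        // Hypothetical sketch: separate two overlapping objects along the shallower
        // axis of their intersection, half the correction each, scaled by depth.
        static void ResolveOverlap(GameObject a, GameObject b, float elapsedMs)
        {
            Rectangle overlap = Rectangle.Intersect(a.CollisionRectangle, b.CollisionRectangle);
            if (overlap.IsEmpty)
                return;

            const float pushScale = 4f;   // tune: how strongly depth converts into velocity

            if (overlap.Width < overlap.Height)
            {
                // Push apart horizontally; sign depends on which side 'a' is on.
                float dir = a.CollisionRectangle.Center.X < b.CollisionRectangle.Center.X ? -1f : 1f;
                float push = overlap.Width * pushScale * elapsedMs;
                a.Velocities.Add(new Vector2(dir * push * 0.5f, 0));
                b.Velocities.Add(new Vector2(-dir * push * 0.5f, 0));
            }
            else
            {
                // Push apart vertically, so stacking resolves gradually instead of snapping.
                float dir = a.CollisionRectangle.Center.Y < b.CollisionRectangle.Center.Y ? -1f : 1f;
                float push = overlap.Height * pushScale * elapsedMs;
                a.Velocities.Add(new Vector2(0, dir * push * 0.5f));
                b.Velocities.Add(new Vector2(0, -dir * push * 0.5f));
            }
        }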

    Read the article

  • SEO/Google: How should I handle multiple countries and domains?

    - by Valorized
    Hello. I'm the webmaster of an online shop based in Austria (Europe), so we registered "example.at". We also own other domain names like "example-shop.com" and "example.info". Currently all those domains are redirected (301) to the .at one. Still available: "example.net" and "example.org" (and .ws/.cc); unfortunately not available: .de/.eu. The .com is currently owned by one of our partners; the contract ends in 2012, but until then we have no chance to get that one.

    Recently I read more about geo-targeting and I noticed one big deal: the TLD ".at" is hardly recognised in Germany (google.de) whereas it is excellently listed in Austria (google.at). Because of the .at I cannot set the target location manually (or to unlisted). More info: https://www.google.com/support/webmasters/bin/answer.py?answer=62399&hl=en. This is a big problem. I looked at Google Analytics and, although Germany is 10x as big as Austria, there are more visits from Austria.

    So, how should I configure the domains in order to get the best results in both Germany and Austria? I thought of some solutions:

    1. Stop redirecting the .info. Then there would be a duplicate of the .at one. Moreover, in Webmaster Tools, I could set the target location of the .info to Germany. As the .at still targets Austria, both would be targeted; however, I don't know if Google punishes one of them because of the duplicate content.
    2. Same as 1. but with .net or .org (I think .info is not a "nice" domain, and moreover I think search engines prefer .com, .net or .org to .info).
    3. Same as 1. (or 2.) but with a rel="canonical" on the new one, pointing to the .at. Con: I don't think this will improve the situation, because it still tells Google that the .at one is more important, like: "if .info points to .at, the target may still be Austria".
    4. rel="canonical" on the .at, pointing to the new one (.info or .net or .org). However, I fear that this will have a negative impact on the listing on google.at, because: "Hey, the well-known .at is not important anymore, so let's focus on the .info which is not well-known." Therefore: bad position in search results.
    5. Redirect .at to the new one (.info or .net or .org) with a 301 redirect. Con: Might be worse than 4; we might lose PageRank (or "the value of the page", because Google says that PageRank is not important anymore). Moreover this might be even more confusing for the customers. In 3. or 4. customers don't get redirected and do not see the canonical meta tag.

    So, dear experts, please tell me what the best option would be! Thank you very much for your advice in advance and please excuse the long question. I really appreciate this network! Please note: it's exactly the same content AND language. In Austria we speak German.

    Read the article

  • LINQ To SQL ignore unique constraint exception and continue

    - by Martin
    I have a single table in a database called Users:

        Users
        ------
        ID (PK, Identity)
        Username (Unique Index)

    I have set up a unique index on the Username column to prevent duplicates. I am then enumerating through a collection and creating a new user in the database for each item. What I want to do is just insert a new user and ignore the exception if the unique key constraint is violated (as it's clearly a duplicate record in that case). This is to avoid having to craft "where not exists" kinds of queries. First off, is this going to be any more efficient, or should my insert code be checking for duplicates instead? I'm drawn more to the database having that logic, as this prevents any other type of client from inserting duplicate data. My other issue is related to LINQ to SQL. I have the following code:

        public class TestRepo
        {
            DatabaseDataContext database = new DatabaseDataContext();

            public void Add(string username)
            {
                database.Users.InsertOnSubmit(new User() { Username = username });
            }

            public void Save()
            {
                database.SubmitChanges();
            }
        }

    And then I iterate over a collection and insert new users, ignoring any exceptions:

        TestRepo repo = new TestRepo();

        foreach (var name in new string[] { "Tim", "Bob", "John" })
        {
            try
            {
                repo.Add(name);
                repo.Save();
            }
            catch { }
        }

    The first time this is run, great, I have three users in the table. If I remove the second one and run this code again, nothing is inserted. I expected the first insert to fail with the exception, the second to succeed (as I just removed that item from the DB) and the third to then fail. What seems to be happening is that once the SqlException is thrown (even though the loop continues to iterate), all of the subsequent inserts fail, even when there isn't a row in the table that would cause a unique violation. Can anyone explain this? P.S. The only workaround I could find was to instantiate the repo each time before the insert; then it worked exactly as expected, indicating that it's something to do with the LINQ to SQL DataContext. Thanks.
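
    A likely explanation (my note, not from the original question): a LINQ to SQL DataContext keeps the failed insert in its pending change set after SubmitChanges throws, so every later SubmitChanges retries that same insert and fails again. A minimal sketch of one workaround, assuming the same generated DatabaseDataContext and User types, is to scope a fresh context to each attempt (SqlException lives in System.Data.SqlClient):

        // Sketch: one short-lived context per insert, so a failed attempt
        // cannot poison the change set used by later attempts.
        foreach (var name in new string[] { "Tim", "Bob", "John" })
        {
            using (var context = new DatabaseDataContext())
            {
                try
                {
                    context.Users.InsertOnSubmit(new User() { Username = name });
                    context.SubmitChanges();
                }
                catch (SqlException)
                {
                    // Unique-constraint violation: the name already exists, skip it.
                }
            }
        }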

    Read the article

  • Command /Developer/usr/bin/dsymutil failed with exit code 10

    - by Evan Robinson
    I am getting the above error message randomly (so far as I can tell) on iPhone projects. Occasionally it will go away after one of the following:

    1. Clean
    2. Restart Xcode
    3. Reboot
    4. Reinstall Xcode

    But sometimes it won't. When it won't, the only solution I have found is to take all the source material, import it into a new project, and then redo all the connections in IB. Then I'm good until it strikes again. Anybody have any suggestions?

    [Update 20091030] I have tried building both debug and release versions, both full and lite versions. I've also tried switching the debug symbols from DWARF with external dSYM file to DWARF and to stabs. Clean builds in all formats make no difference. Permission repairs change nothing. Setting up a new user has no effect: same error on the builds. Thanks for the suggestions!

    [Update 20091031] Here's an easier and (apparently) reliable workaround. It hinges upon the discovery that the problem is linked to a target, not a project:

    1. In the same project file, create a new target.
    2. Option-drag (copy) all the files from the BAD target's 'Copy Bundle Resources' folder to the NEW target's 'Copy Bundle Resources' folder.
    3. Repeat (2) with 'Compile Sources' and 'Link Binary With Libraries'.
    4. Duplicate the Info.plist file for the BAD target and name it correctly for the NEW target.
    5. Build the NEW target!

    [Update 20100222] Apparently an IDE bug, now apparently fixed, although Apple does not allow direct access to the original bug that mine was marked a duplicate of. I can no longer reproduce this behaviour, so hopefully it is dead, dead, dead.

    Read the article

  • Update multiple rows with known keys without inserting new rows if nonexistent keys are found

    - by Kirzilla
    Hello. Let's imagine that we have a table named items:

        table: items
        item_id INT PRIMARY AUTO_INCREMENT
        title VARCHAR(255)
        views INT

    Let's imagine that it is filled with something like (1, item-1, 10), (2, item-2, 10), (3, item-3, 15). I want to do a multi-row update of views for these items from data taken from this array ([item_id] => [views]):

        '1' => '50',
        '2' => '60',
        '3' => '70',
        '5' => '10'

    IMPORTANT! Please note that we have item_id=5 in the array, but we don't have item_id=5 in the database. I could use INSERT ... ON DUPLICATE KEY UPDATE, but that way item_id=5 would be inserted into the items table. How do I avoid inserting a new key? I just want item_id=5 to be skipped because it is not in the table. Of course, before execution I could select the existing keys from the items table, compare them with the keys in the array, delete the nonexistent keys and then perform INSERT ... ON DUPLICATE KEY UPDATE. But maybe there is some more elegant solution? Thank you.
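
    One possible shape for this (a sketch I'm adding, not from the original question): build a single UPDATE with a CASE expression and restrict it with WHERE ... IN, so ids that don't exist in the table simply match nothing and no row is ever inserted. The application code would generate the CASE branches and the IN list from the array:

        -- Built from the example array above; only existing rows are touched.
        UPDATE items
        SET views = CASE item_id
                WHEN 1 THEN 50
                WHEN 2 THEN 60
                WHEN 3 THEN 70
                WHEN 5 THEN 10
            END
        WHERE item_id IN (1, 2, 3, 5);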

    Read the article

  • Install CLSQL on Mac OS X

    - by Ken
    I have SBCL installed (via MacPorts/DarwinPorts) on my Intel Core 2 Duo MacBook running 10.5.8. I've installed several libraries like this:

        (require 'asdf)
        (require 'asdf-install)
        (asdf-install:install 'cl-who)

    But when I tried to install CLSQL this way ('clsql), after it downloaded, I got this:

        ...
        ; registering #<SYSTEM CLSQL-UFFI {123D9E01}> as CLSQL-UFFI
        ; $ cd /Users/ken/.sbcl/site/clsql-5.0.5/uffi/; make
        cc -arch x86_64 -arch i386 -bundle /usr/lib/bundle1.o -flat_namespace -undefined suppress clsql_uffi.c -o clsql_uffi.dylib
        ld: duplicate symbol dyld_stub_binding_helper in /usr/lib/bundle1.o and /usr/lib/bundle1.o for architecture i386
        ld: duplicate symbol dyld_stub_binding_helper in /usr/lib/bundle1.o and /usr/lib/bundle1.o for architecture x86_64
        collect2: ld returned 1 exit status
        collect2: ld returned 1 exit status
        lipo: can't open input file: /var/folders/Nf/Nf4o5ArDFaWBH2OwtnWM3E+++TQ/-Tmp-//ccJyZxou.out (No such file or directory)
        make: *** [clsql_uffi.so] Error 1

    Is there something I forgot, or some trick to get it to build on Mac OS X? I know very little about C libraries on the Mac these days, so I don't even know where to start on this. Thanks!

    Read the article

  • Asp.Net MVC2 Clientside Validation problem with controls with prefixes

    - by alexander
    The problem is: when I put two controls of the same type on a page, I need to specify different prefixes for binding. In this case the validation rules generated right after the form are incorrect. So how do I get client validation to work for this case? The page contains:

        <% Html.RenderPartial(ViewLocations.Shared.PhoneEditPartial,
               new PhoneViewModel { Phone = person.PhonePhone, Prefix = "PhonePhone" });
           Html.RenderPartial(ViewLocations.Shared.PhoneEditPartial,
               new PhoneViewModel { Phone = person.FaxPhone, Prefix = "FaxPhone" }); %>

    The control, a ViewUserControl<PhoneViewModel>:

        <%= Html.TextBox(Model.GetPrefixed("CountryCode"), Model.Phone.CountryCode) %>
        <%= Html.ValidationMessage("Phone.CountryCode",
                new { id = Model.GetPrefixed("CountryCode"), name = Model.GetPrefixed("CountryCode") }) %>

    where Model.GetPrefixed("CountryCode") just returns "FaxPhone.CountryCode" or "PhonePhone.CountryCode" depending on the prefix. The validation rules generated after the form are duplicated for the field name "Phone.CountryCode", while the desired result is two rules (required, number) for each of the field names "FaxPhone.CountryCode" and "PhonePhone.CountryCode". The question is somewhat a duplicate of http://stackoverflow.com/questions/2675606/asp-net-mvc2-clientside-validation-and-duplicate-ids-problem but the advice to manually generate ids doesn't help.
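
    A sketch of one direction sometimes suggested for this (my assumption, not something confirmed in the original question): let MVC 2's templated helpers manage the prefix, so the generated input names and the client-side validation field names stay in sync. The template path and property names below are illustrative.

        <%-- Parent view: EditorFor applies "PhonePhone" / "FaxPhone" as the field prefix automatically. --%>
        <%= Html.EditorFor(m => m.PhonePhone) %>
        <%= Html.EditorFor(m => m.FaxPhone) %>

        <%-- ~/Views/Shared/EditorTemplates/Phone.ascx (a ViewUserControl<Phone>) --%>
        <%= Html.TextBoxFor(m => m.CountryCode) %>
        <%= Html.ValidationMessageFor(m => m.CountryCode) %>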

    Read the article

  • Time complexity for Search and Insert operation in sorted and unsorted arrays that includes duplicates

    - by iecut
    1) For a sorted array I have used binary search. We know that the worst case complexity for the SEARCH operation in a sorted array is O(lg N) if we use binary search, where N is the number of items in the array. What is the worst case complexity for the search operation in an array that includes duplicate values, using binary search? Will it be the same O(lg N)? Please correct me if I am wrong! Also, what is the worst case for the INSERT operation in a sorted array using binary search? My guess is O(N). Is that right?

    2) For an unsorted array I have used linear search. Now we have an unsorted array that also accepts duplicate elements/values. What are the worst case complexities for both the SEARCH and INSERT operations? I think we can use linear search, which gives O(N) worst case time for both. Can we do better than this for an unsorted array, and does the complexity change if we accept duplicates in the array?
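
    For the duplicate-value case in 1), a small sketch I'm adding (not from the original question): binary search stays O(lg N) even with duplicates, and if you need the first matching index rather than an arbitrary one, a lower-bound variant still performs O(lg N) comparisons. That same lower-bound position is also where an insert would go to keep the array sorted; the O(N) cost of insertion comes from shifting elements, not from the search.

        // Minimal sketch: lower-bound binary search over a sorted array with duplicates.
        // Returns the index of the first element equal to target, or -1 if absent. O(lg N).
        static int FindFirst(int[] sorted, int target)
        {
            int lo = 0, hi = sorted.Length;      // search window [lo, hi)
            while (lo < hi)
            {
                int mid = lo + (hi - lo) / 2;
                if (sorted[mid] < target)
                    lo = mid + 1;                // first match (if any) is to the right
                else
                    hi = mid;                    // mid could still be the first match
            }
            return (lo < sorted.Length && sorted[lo] == target) ? lo : -1;
        }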

    Read the article

  • RESTful idempotence

    - by DutrowLLC
    I'm designing a RESTful web service utilizing ROA (resource oriented architecture). I'm trying to work out an efficient way to guarantee idempotence for PUT requests that create new resources, in cases where the server designates the resource key. From my understanding, the traditional approach is to create a type of transaction resource such as /CREATE_PERSON. The client-server interaction for creating a new person resource would then be in two parts:

    Step 1: Get a unique transaction id for creating the new PERSON resource:

        **Client request:**
        GET /CREATE_PERSON

        **Server response:**
        200 OK
        transaction-id: "as8yfasiob"

    Step 2: Create the new person resource in a request guaranteed to be unique by using the transaction id:

        **Client request**
        PUT /CREATE_PERSON/{transaction_id}
        first_name="Big bubba"

        **Server response**
        201 Created           // (If the request is a duplicate, it would send this same
        PersonKey="398u4nsdf" // response without creating a new resource. It would perhaps
                              // send an error response if a transaction id were reused on a
                              // non-duplicate request, but I have control over the client,
                              // so I can guarantee that this won't happen.)

    The problem I see with this approach is that it requires sending two requests to the server in order to do the single operation of creating a new PERSON resource. This creates a performance issue, increasing the chance that the user will be left waiting around for the client to complete the request. I've been trying to hash out ideas for eliminating the first step, such as pre-sending transaction ids with each request, but most of my ideas have other issues or involve sacrificing the statelessness of the application. Is there a way to do this?
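
    One direction sometimes used here (my sketch, not from the original question): let the client mint the unique part itself, for example a UUID, so the PUT is naturally idempotent and the preliminary GET disappears; the server can still keep its own internal key. The URI and field names below are made up for illustration:

        **Client request**
        PUT /people/6f1d8bd2-5c1e-4a3f-9d2a-7f0c1a2b3c4d   // id generated client-side (UUID)
        first_name="Big bubba"

        **Server response**
        201 Created           // replaying the exact same PUT returns the same resource
        PersonKey="398u4nsdf" // again, so duplicate requests are harmless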

    Read the article

  • How do I manage application configuration in ASP.NET?

    - by GlennS
    I am having difficulty with managing configuration of an ASP.Net application to deploy for different clients. The sheer volume of different settings which need twiddling takes up large amounts of time, and the current configuration methods are too complicated to enable us to push this responsibility out to support partners. Any suggestions for better methods to handle this or good sources of information to research?

    How we do things at present:

    - Various xml configuration files which are referenced in Web.Config, for example an AppSettings.xml. Configurations for specific sites are kept in duplicate configuration files.
    - Text files containing lists of data specific to the site.
    - In some cases, manual one-off changes to the database.
    - C# configuration for Windsor IOC.

    The specific issues we are having:

    - Different sites with different features enabled, different external services we have to talk to and different business rules.
    - Different deployment types (live, test, training).
    - Configuration keys change across versions (get added, removed), meaning we have to update all the duplicate files.
    - We still need to be able to alter keys while the application is running.

    Our current thoughts on how we might approach this are:

    - Move the configuration into dynamically compiled code (possibly Boo, Binsor or JavaScript).
    - Have some form of diffing/merging configuration: combine a default config with a live/test/training config and a site-specific config.

    Read the article

  • How to exclude R*.class files from a proguard build

    - by Jeremy Bell
    I am one step away from making the method described here: http://stackoverflow.com/questions/2761443/targeting-android-with-scala-2-8-trunk-builds work with a single project (vs one project for Scala and one for Android). I've come across a problem. Using this input file (arguments) to ProGuard:

        -injars bin;lib/scala-library.jar(!META-INF/MANIFEST.MF,!library.properties)
        -outjar lib/scandroid.jar
        -libraryjars lib/android.jar
        -dontwarn
        -dontoptimize
        -dontobfuscate
        -dontskipnonpubliclibraryclasses
        -dontskipnonpubliclibraryclassmembers
        -keepattributes Exceptions,InnerClasses,Signature,Deprecated,
            SourceFile,LineNumberTable,*Annotation*,EnclosingMethod
        -keep public class org.scala.jeb.** { public protected *; }
        -keep public class org.xml.sax.EntityResolver { public protected *; }

    ProGuard successfully builds scandroid.jar, however it appears to have included the generated R classes that the Android resource builder generates and compiles. In this case, they are located in bin/org/jeb/R*.class. This is not what I want. The Android Dalvik converter cannot build because it thinks there is a duplicate of the R class (it's in scandroid and also in the R*.class files). How can I modify the above ProGuard arguments to exclude the R*.class files from scandroid.jar so the Dalvik converter is happy?

    Edit: I should note that I tried adding ;bin/org/jeb/R.class;etc... to the -libraryjars argument, and that only seemed to cause it to complain about duplicate classes, and in addition ProGuard decided to exclude my Scala class files too.
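
    A sketch of one possible tweak (my assumption, not something confirmed in the post): ProGuard lets you attach file filters in parentheses to -injars entries, so the generated R classes could be filtered out of the program input rather than listed under -libraryjars, for example:

        -injars bin(!**/R.class,!**/R$*.class);lib/scala-library.jar(!META-INF/MANIFEST.MF,!library.properties)

    Everything else in the configuration would stay the same; the filter only hides the R classes (and their inner classes) from the scandroid.jar output, so the Dalvik converter sees a single copy coming from the regular Android build.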

    Read the article

  • Copy data from different worksheet to a master worksheet but no duplicates

    - by sam
    Hi all, I want to clarify my initial question, hoping for a possible solution from any savior out there. Say I have 3 Excel sheets, one for each user, for data entry, located in separate workbooks to avoid Excel shared-workbook problems. I also have a master sheet in another workbook where I want the data entered on those individual sheets (precisely: sheet 1 should copy to the next available row of sheet 1 in the master workbook) as the users enter it. I need VBA code that can copy each record without copying a duplicate row into the master sheet, but instead highlight the duplicate row, look up the initial record in the master sheet and return the name of the inputter from column (I), where each user signs their initials after every row of entry, like below. All 4 worksheets are formatted as below:

        lastname  account   cardno.  type   tran amount  date       location   comments                initials
        JAME      65478923  1975     cash   500          4/10/2010  miles st.  this acct is resolve    MLK
        BEN       52436745  1880     CHECK  400          4/12/2010  CAREY ST   Ongoing investigation   MLK
        JAME      65478923  1975     cash   500          4/10/2010  miles st.  this acct is resolve    MLK

    I need the VBA to recognize duplicates only if the account number and the card number match the initial record. So if a duplicate exists, a pop-up message should be displayed saying that a duplicate exists, and it should return the initial inputter (say MLK from column (I)) to any user inputting in the individual worksheets other than the warehouse (master) sheet. Please, any idea will be appreciated.

    Read the article

  • How do you think while formulating Sql Queries. Is it an experience or a concept ?

    - by Shantanu Gupta
    I have been working on SQL Server and front-end coding and have usually faced problems formulating queries. I do understand most of the concepts of SQL that are needed in formulating queries, but whenever some new functionality comes into the picture that can be done with a SQL query, I usually fail at resolving it. I am very comfortable with select queries using joins and all such things, but when it comes to DML operations I usually fail. For every query that I have never written before, I usually find myself uncomfortable while creating it. Whenever I go for an interview I usually face this problem. Is there some concept behind approaching the formulation of SQL queries?

    E.g. I need to create a SQL query such that: a table contains a single column having duplicate records, and I need to remove the duplicate records. I know I can find the solution to this query very easily by Googling, but I want to know how everyone comes to the desired result. Is it something like "practice makes a man perfect", i.e. once you have done it, next time you will be able to formulate it, or is there some logic or concept behind it?

    I could have gotten my answer to the above problem simply by posting it on Stack Overflow, and I would have had an answer within 5 to 10 minutes, but I want to know the reason. How do you work on any new kind of query? Is it mostly a contribution of experience or an implementation of concepts? Whenever I learn some new thing in coding I try to utilize it wherever I can use it. But here the scenario seems to be different, because maybe I am lagging in some concepts.
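
    For the concrete example mentioned (a single column containing duplicate rows), here is a sketch of one common SQL Server approach; the table and column names are made up:

        -- Hypothetical names: keeps the first occurrence of each value, deletes the rest.
        WITH numbered AS (
            SELECT col1,
                   ROW_NUMBER() OVER (PARTITION BY col1 ORDER BY col1) AS rn
            FROM dbo.MyTable
        )
        DELETE FROM numbered
        WHERE rn > 1;

    The transferable habit behind it is arguably to describe the rows to keep or remove as a set (here, "every row whose row number within its value group is greater than 1") rather than reasoning row by row.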

    Read the article

  • Are AJAX sites crawlable by search engines?

    - by frankadelic
    I had always assumed that AJAX-driven content was invisible to search engines (i.e. content inserted into the DOM via XMLHTTPRequest). For example, in this site, the main content is loaded via AJAX request by the browser: http://www.trustedsource.org/query/terra.cl ...if you view this page with Javascript disabled, the main content area is blank. However, Google cache shows the full content after the AJAX load: http://74.125.155.132/search?q=cache:JqcT6EVDHBoJ:www.trustedsource.org/query/terra.cl+http://www.trustedsource.org/query/terra.cl&cd=1&hl=en&ct=clnk&gl=us So, apparently search engines do index content loaded by AJAX.

    Questions:

    - Is this a new feature in search engines? Most postings on the web indicate that you have to publish duplicate static HTML content for search engines to find them.
    - Are there any tricks to get AJAX-driven content to be crawled by search engines (besides creating duplicate static HTML content)?
    - Will the AJAX-driven content be indexed if it is loaded from a separate subdomain? How about a separate domain?

    Read the article

  • Memory Bandwidth Performance for Modern Machines

    - by porgarmingduod
    I'm designing a real-time system that occasionally has to duplicate a large amount of memory. The memory consists of non-tiny regions, so I expect the copying performance will be fairly close to the maximum bandwidth the relevant components (CPU, RAM, motherboard) can manage. This led me to wonder what kind of raw memory bandwidth a modern commodity machine can muster.

    My aging Core2Duo gives me 1.5 GB/s if I use 1 thread to memcpy() (and understandably less if I memcpy() with both cores simultaneously). While 1.5 GB is a fair amount of data, the real-time application I'm working on will have something like 1/50th of a second, which means 30 MB. Basically, almost nothing. And perhaps worst of all, as I add multiple cores, I can process a lot more data without any increased performance for the needed duplication step. But a low-end Core2Duo isn't exactly hot stuff these days.

    Are there any sites with information, such as actual benchmarks, on raw memory bandwidth on current and near-future hardware? Furthermore, for duplicating large amounts of data in memory, are there any shortcuts, or is memcpy() as good as it will get? Given a bunch of cores with nothing to do but duplicate as much memory as possible in a short amount of time, what's the best I can do?

    Read the article

  • UPDATE query that fixes orphaned records

    - by Jed
    I have an Access database that has two tables that are related by PK/FK. Unfortunately, the database tables have allowed duplicate/redundant records, which has made the database a bit screwy. I am trying to figure out a SQL statement that will fix the problem. To better explain the problem and goal, I have created example tables to use as a reference. You'll notice there are two tables, a Student table and a TestScore table, where StudentID is the PK/FK. The Student table contains duplicate records for students John, Sally, Tommy, and Suzy. In other words, the Johns with StudentIDs 1 and 5 are the same person, Sally 2 and 6 are the same person, and so on. The TestScore table relates test scores to a student.

    Ignoring how/why the Student table allowed duplicates, etc. - the goal I'm trying to accomplish is to update the TestScore table so that it replaces the StudentIDs that have been disabled with the corresponding enabled StudentID. So, all StudentIDs = 1 (John) will be updated to 5, all StudentIDs = 2 (Sally) will be updated to 6, and so on. The resulting TestScore table I'm shooting for would no longer have any reference to the disabled StudentIDs 1-4. Can you think of a query (compatible with MS Access's Jet engine) that can accomplish this goal? Or maybe you can offer some tips/perspectives that will point me in the right direction. Thanks.
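
    A sketch of one possible shape for such a query (my assumption, not from the original post: the Student table is assumed to have a Name column and a yes/no Disabled column distinguishing the old rows from their replacements; adjust to the real schema). Jet/Access allows UPDATE with joined tables, including a self-join on Student:

        -- Hypothetical column names (Name, Disabled): maps each disabled StudentID to the
        -- enabled StudentID with the same name, then rewrites TestScore to use the new ID.
        UPDATE (TestScore
            INNER JOIN Student AS OldS ON TestScore.StudentID = OldS.StudentID)
            INNER JOIN Student AS NewS ON OldS.Name = NewS.Name
        SET TestScore.StudentID = NewS.StudentID
        WHERE OldS.Disabled = True AND NewS.Disabled = False;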

    Read the article

  • What does the subversion error "Could not read status line" mean?

    - by Jergason
    Exact duplicate: SVN: Could not read status line: connection was closed by server. (This is not an exact duplicate: the other question was asking about getting the error in a specific situation, and the answer was vague at best.) This is a fairly basic question, but it is driving me nuts. I have set up a brand new repository at beanstalk.com. They give me the URL, http://.svn.beanstalkapp.com/blog. They also automatically create the tags, trunk and branches folders in the repository. I have checked out the trunk folder and used svn add to add the new file. I am trying to do my first commit, but I get this error:

        Commit failed (details follow):
        CHECKOUT of '/foo/!svn/bln/1': Could not read status line: connection was closed by server.
        (http://user_name@my_name.svn.beanstalkapp.com)

    What does this mean, and what causes it? I have googled for a definition of what "Could not read status line" means, but was unable to find anything explaining it.

    Edit: I was getting this error while trying to manipulate my repository from behind a firewall. I still don't know what was causing it, but I don't have this problem at home. Strangeness.

    Read the article

  • SQL Concurrent test update question

    - by ptoinson
    Howdy folks, I have a SQL Server 2008 database in which I have a table for Tags. A tag is just an id and a name. The definition of the Tag table looks like:

        CREATE TABLE [dbo].[Tag](
            [ID] [int] IDENTITY(1,1) NOT NULL,
            [Name] [varchar](255) NOT NULL
            CONSTRAINT [PK_Tag] PRIMARY KEY CLUSTERED
            (
                [ID] ASC
            ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
                    ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)
        )

    Name also has a unique index. Further, I have several processes adding data to this table at a pretty rapid rate. These processes use a stored proc that looks like:

        ALTER PROC [dbo].[lg_Tag_Insert]
            @Name varchar(255)
        AS
            DECLARE @ID int
            SET @ID = (select ID from Tag where Name=@Name)
            if @ID is null
            begin
                INSERT Tag(Name) VALUES (@Name)
                RETURN SCOPE_IDENTITY()
            end
            else
            begin
                return @ID
            end

    My issue is that, other than being a novice at concurrent database design, there seems to be a race condition that is causing me to occasionally get an error that I'm trying to enter duplicate keys (Name) into the DB. The error is:

        Cannot insert duplicate key row in object 'dbo.Tag' with unique index 'IX_Tag_Name'.

    This makes sense; I'm just not sure how to fix it. If it were code I would know how to lock the right areas. SQL Server is quite a different beast. First question: what is the proper way to code this 'check, then update' pattern? It seems I need to get an exclusive lock on the row during the check, rather than a shared lock, but it's not clear to me the best way to do that. Any help in the right direction will be greatly appreciated. Thanks in advance.
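
    A minimal sketch of one common way to serialize this check-then-insert (my example, not from the original post): wrap it in a transaction and take UPDLOCK/HOLDLOCK hints on the existence check, so two concurrent callers for the same name cannot both see "not found" and both insert.

        -- Sketch: the hints make the SELECT take and hold an update/range lock until
        -- commit, so concurrent inserts of the same name serialize instead of colliding.
        ALTER PROC [dbo].[lg_Tag_Insert]
            @Name varchar(255)
        AS
        BEGIN
            SET NOCOUNT ON;
            DECLARE @ID int;

            BEGIN TRAN;

            SELECT @ID = ID
            FROM dbo.Tag WITH (UPDLOCK, HOLDLOCK)
            WHERE Name = @Name;

            IF @ID IS NULL
            BEGIN
                INSERT dbo.Tag(Name) VALUES (@Name);
                SET @ID = SCOPE_IDENTITY();
            END

            COMMIT TRAN;

            RETURN @ID;
        END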

    Read the article

  • Visual Studio 2010 / ASP.NET MVC / Publish

    - by SevenCentral
    I just did a clean install on Windows 7 x64 Professional with the final release of Visual Studio 2010 Premium. In order to duplicate what I'm experiencing, do the following:

    1. Create a new ASP.NET MVC 2 Web Application
    2. Right click the project and select Properties
    3. On the Web tab, select "Use Local IIS Web Server"
    4. Click on Create Virtual Directory
    5. Save all
    6. Unload the project
    7. Edit the project file
    8. Change MvcBuildViews to true
    9. Save all
    10. Reload the project
    11. Right click the project and select Publish
    12. Choose the file system publish method
    13. Enter a target location
    14. Choose Delete all existing files
    15. Select Publish
    16. Right click the project
    17. Select Publish

    Each time I do the above I get the following error:

        "It is an error to use a section registered as allowDefinition='MachineToApplication' beyond application level..."

    The error originates from obj\debug\package\packagetmp\web.config, relative to the project directory. I can repeat this all day long with any MVC 2 project I've built. In order to fix this problem, I need to set MvcBuildViews to false in the project file. That's not really an option. This wasn't a problem in Visual Studio 2008, and it seems to be an issue with the way the Publish command stages files beneath the project directory. Can anyone else duplicate this error? Is this a bug or by design? Is there a fix, workaround, etc...? Thanks.

    Read the article

  • Static library woes in iPhone 3.x with categories and C libraries

    - by hgpc
    I have a static library (let's call it S) that uses a category (NSData+Base64 from MGTwitterEngine) and a C library (MiniZip wrapped by ZipArchive). This static library is used in an iPhone 3.x project (let's call it A). To be able to use the MiniZip library I included its files in project A as well as the static library S. If not I get compilation errors. Project A works fine on the simulator. When I run it on the device, I get unrecognized selector errors when the category is used. As pointed out here, it seems there's a linker bug that affects categories in iPhone 3.x (http://stackoverflow.com/questions/1147676/categories-in-static-library-for-iphone-device-3-0). The workaround is to add -all_load to the Other Linker Flags of the project that references the static library. However, if I do this then I get duplicate symbol errors because I included the MiniZip libraries in project A. A workaround is to include the category files in project A as well. If I do this, project A works well in the device, but fails to build on the simulator because of duplicate symbol errors. How should I set up project A to make it work on the simulator and the device with the same configuration?

    Read the article

  • Visual Studio 2010 / ASP.NET MVC 2 / Publish Error

    - by SevenCentral
    I just did a clean install on Windows 7 x64 Professional with the final release of Visual Studio 2010 Premium. In order to duplicate what I'm experiencing, do the following:

    1. Create a new ASP.NET MVC 2 Web Application
    2. Right click the project and select Properties
    3. On the Web tab, select "Use Local IIS Web Server"
    4. Click on Create Virtual Directory
    5. Save all
    6. Unload the project
    7. Edit the project file
    8. Change MvcBuildViews to true
    9. Save all
    10. Reload the project
    11. Right click the project and select Publish
    12. Choose the file system publish method
    13. Enter a target location
    14. Choose Delete all existing files
    15. Select Publish
    16. Right click the project
    17. Select Publish

    Each time I do the above I get the following error:

        "It is an error to use a section registered as allowDefinition='MachineToApplication' beyond application level..."

    The error originates from obj\debug\package\packagetmp\web.config, relative to the project directory. I can repeat this all day long with any MVC 2 project I've built. In order to fix this problem, I need to set MvcBuildViews to false in the project file. That's not really an option. This wasn't a problem in Visual Studio 2008, and it seems to be an issue with the way the Publish command stages files beneath the project directory. Can anyone else duplicate this error? Is this a bug or by design? Is there a fix, workaround, etc...? Thanks.

    Read the article

  • How do I improve my performance with this singly linked list struct within my program?

    - by Jesus
    Hey guys, I have a program that does operations on sets of strings. We have to implement functions such as addition and subtraction of two sets of strings. We are supposed to get it down to the point where performance is O(N+M), where N and M are the sizes of the two sets. Right now, I believe my performance is O(N*M), since for each element of N, I go through every element of M. I'm particularly focused on getting the subtraction to the proper performance; if I can get that down, I believe I can carry that knowledge over to the rest of what I have to implement.

    The '-' operator is supposed to work like this, for example:

        Declare set1 to be an empty set.
        Declare set2 to be a set with { a b c } elements
        Declare set3 to be a set with { b c d } elements
        set1 = set2 - set3

    And now set1 is supposed to equal { a }. So basically, just remove any element from set3 that is also in set2. For the addition implementation (overloaded '+' operator), I also do the sorting of the strings (since we have to). All the functions work right now, by the way. So I was wondering if anyone could a) confirm that currently I'm doing O(N*M) performance, and b) give me some ideas/implementations on how to improve the performance to O(N+M).

    Note: I cannot add any member variables or functions to the class strSet or to the node structure. The implementation of the main program isn't very important, but I will post the code for my class definition and the implementation of the member functions:

    strSet2.h (implementation of my class and struct):

        // Class to implement sets of strings
        // Implements operators for union, intersection, subtraction,
        // etc. for sets of strings
        // V1.1 15 Feb 2011 Added guard (#ifndef), deleted using namespace RCH

        #ifndef _STRSET_
        #define _STRSET_

        #include <iostream>
        #include <vector>
        #include <string>

        // Deleted: using namespace std; 15 Feb 2011 RCH

        struct node
        {
            std::string s1;
            node * next;
        };

        class strSet
        {
        private:
            node * first;

        public:
            strSet ();                      // Create empty set
            strSet (std::string s);         // Create singleton set
            strSet (const strSet &copy);    // Copy constructor
            ~strSet ();                     // Destructor

            int SIZE() const;
            bool isMember (std::string s) const;

            strSet operator + (const strSet& rtSide);   // Union
            strSet operator - (const strSet& rtSide);   // Set subtraction
            strSet& operator = (const strSet& rtSide);  // Assignment
        }; // End of strSet class

        #endif // _STRSET_

    strSet2.cpp (implementation of member functions):

        #include <iostream>
        #include <vector>
        #include <string>
        #include "strset2.h"

        using namespace std;

        strSet::strSet()
        {
            first = NULL;
        }

        strSet::strSet(string s)
        {
            node *temp;
            temp = new node;
            temp->s1 = s;
            temp->next = NULL;
            first = temp;
        }

        strSet::strSet(const strSet& copy)
        {
            if(copy.first == NULL)
            {
                first = NULL;
            }
            else
            {
                node *n = copy.first;
                node *prev = NULL;
                while (n)
                {
                    node *newNode = new node;
                    newNode->s1 = n->s1;
                    newNode->next = NULL;
                    if (prev)
                    {
                        prev->next = newNode;
                    }
                    else
                    {
                        first = newNode;
                    }
                    prev = newNode;
                    n = n->next;
                }
            }
        }

        strSet::~strSet()
        {
            if(first != NULL)
            {
                while(first->next != NULL)
                {
                    node *nextNode = first->next;
                    first->next = nextNode->next;
                    delete nextNode;
                }
            }
        }

        int strSet::SIZE() const
        {
            int size = 0;
            node *temp = first;
            while(temp != NULL)
            {
                size++;
                temp = temp->next;
            }
            return size;
        }

        bool strSet::isMember(string s) const
        {
            node *temp = first;
            while(temp != NULL)
            {
                if(temp->s1 == s)
                {
                    return true;
                }
                temp = temp->next;
            }
            return false;
        }

        strSet strSet::operator + (const strSet& rtSide)
        {
            strSet newSet;
            newSet = *this;

            node *temp = rtSide.first;
            while(temp != NULL)
            {
                string newEle = temp->s1;
                if(!isMember(newEle))
                {
                    if(newSet.first == NULL)
                    {
                        node *newNode;
                        newNode = new node;
                        newNode->s1 = newEle;
                        newNode->next = NULL;
                        newSet.first = newNode;
                    }
                    else if(newSet.SIZE() == 1)
                    {
                        if(newEle < newSet.first->s1)
                        {
                            node *tempNext = newSet.first;
                            node *newNode;
                            newNode = new node;
                            newNode->s1 = newEle;
                            newNode->next = tempNext;
                            newSet.first = newNode;
                        }
                        else
                        {
                            node *newNode;
                            newNode = new node;
                            newNode->s1 = newEle;
                            newNode->next = NULL;
                            newSet.first->next = newNode;
                        }
                    }
                    else
                    {
                        node *prev = NULL;
                        node *curr = newSet.first;
                        while(curr != NULL)
                        {
                            if(newEle < curr->s1)
                            {
                                if(prev == NULL)
                                {
                                    node *newNode;
                                    newNode = new node;
                                    newNode->s1 = newEle;
                                    newNode->next = curr;
                                    newSet.first = newNode;
                                    break;
                                }
                                else
                                {
                                    node *newNode;
                                    newNode = new node;
                                    newNode->s1 = newEle;
                                    newNode->next = curr;
                                    prev->next = newNode;
                                    break;
                                }
                            }
                            if(curr->next == NULL)
                            {
                                node *newNode;
                                newNode = new node;
                                newNode->s1 = newEle;
                                newNode->next = NULL;
                                curr->next = newNode;
                                break;
                            }
                            prev = curr;
                            curr = curr->next;
                        }
                    }
                }
                temp = temp->next;
            }
            return newSet;
        }

        strSet strSet::operator - (const strSet& rtSide)
        {
            strSet newSet;
            newSet = *this;

            node *temp = rtSide.first;
            while(temp != NULL)
            {
                string element = temp->s1;
                node *prev = NULL;
                node *curr = newSet.first;
                while(curr != NULL)
                {
                    if( element < curr->s1 )
                        break;
                    if( curr->s1 == element )
                    {
                        if( prev == NULL)
                        {
                            node *duplicate = curr;
                            newSet.first = newSet.first->next;
                            delete duplicate;
                            break;
                        }
                        else
                        {
                            node *duplicate = curr;
                            prev->next = curr->next;
                            delete duplicate;
                            break;
                        }
                    }
                    prev = curr;
                    curr = curr->next;
                }
                temp = temp->next;
            }
            return newSet;
        }

        strSet& strSet::operator = (const strSet& rtSide)
        {
            if(this != &rtSide)
            {
                if(first != NULL)
                {
                    while(first->next != NULL)
                    {
                        node *nextNode = first->next;
                        first->next = nextNode->next;
                        delete nextNode;
                    }
                }
                if(rtSide.first == NULL)
                {
                    first = NULL;
                }
                else
                {
                    node *n = rtSide.first;
                    node *prev = NULL;
                    while (n)
                    {
                        node *newNode = new node;
                        newNode->s1 = n->s1;
                        newNode->next = NULL;
                        if (prev)
                        {
                            prev->next = newNode;
                        }
                        else
                        {
                            first = newNode;
                        }
                        prev = newNode;
                        n = n->next;
                    }
                }
            }
            return *this;
        }

    Read the article

  • Understanding sql queries formulation methodology. How do you think while formulating Sql Queries

    - by Shantanu Gupta
    I have been working on SQL Server and front-end coding and have usually faced problems formulating queries. I do understand most of the concepts of SQL that are needed in formulating queries, but whenever some new functionality comes into the picture that can be done with a SQL query, I usually fail at resolving it. I am very comfortable with select queries using joins and all such things, but when it comes to DML operations I usually fail. For every query that I have never written before, I usually find myself uncomfortable while creating it. Whenever I go for an interview I usually face this problem. Is there some concept behind approaching the formulation of SQL queries? E.g. I need to create a SQL query such that: a table contains a single column having duplicate records, and I need to remove the duplicate records. I know I can find the solution to this query very easily by Googling, but I want to know how everyone comes to the desired result. Is it something like "practice makes a man perfect", i.e. once you have done it, next time you will be able to formulate it, or is there some logic or concept behind it?

    Read the article
