Search Results

Search found 9371 results on 375 pages for 'existing'.

Page 314/375 | < Previous Page | 310 311 312 313 314 315 316 317 318 319 320 321  | Next Page >

  • Cannot disable index during PL/SQL procedure

    - by nw
    I've written a PL/SQL procedure that would benefit if indexes were first disabled, then rebuilt upon completion. An existing thread suggests this approach:

        alter session set skip_unusable_indexes = true;
        alter index your_index unusable;
        [do import]
        alter index your_index rebuild;

    However, I get the following error on the first alter index statement:

        SQL Error: ORA-14048: a partition maintenance operation may not be combined with other operations
        ORA-06512: [...]
        14048. 00000 - "a partition maintenance operation may not be combined with other operations"
        *Cause:  ALTER TABLE or ALTER INDEX statement attempted to combine a partition
                 maintenance operation (e.g. MOVE PARTITION) with some other operation
                 (e.g. ADD PARTITION or PCTFREE) which is illegal
        *Action: Ensure that a partition maintenance operation is the sole operation specified
                 in ALTER TABLE or ALTER INDEX statement; operations other than those dealing
                 with partitions, default attributes of partitioned tables/indices or
                 specifying that a table be renamed (ALTER TABLE RENAME) may be combined at will

    The problem index is defined so:

        CREATE INDEX A11_IX1 ON STREETS ("SHAPE")
          INDEXTYPE IS "SDE"."ST_SPATIAL_INDEX"
          PARAMETERS ('ST_GRIDS=890,8010,72090 ST_SRID=2');

    This is a custom index type from a 3rd-party vendor, and it causes chronic performance degradation during high-volume update/insert/delete operations. Any suggestions on how to work around this error? By the way, the error only occurs within a PL/SQL block.

    Read the article

  • IE7 (sometimes) not showing website properly

    - by Ra y Mon
    We are a bit desperate. We have launched our website http://www.buscounviaje.com and tested all browsers (IE6-8, Firefox, Safari, Chrome, ...) to make sure everything was OK. However, some users on IE7 and IE6 are complaining that they see everything 'white' with black letters (i.e. CSS styles not being applied). One user said he was getting an "Error 0: Object expected". However, we do not see that error in Firebug, nor on our local installations of IE6 and IE7, and other users with IE6 and IE7 see the site correctly. We have no idea where the problem could be, and we cannot reproduce it because our own IE6 and IE7 work fine. Does anyone see the page without styles, and can you give us a hint about where the problem might be? Reasons we can think of:

    - We are compressing the JS and CSS, and some versions of IE6/IE7 are not able to decompress them.
    - We are trying to use a non-existent object in JavaScript, and some versions of IE6/IE7 do not like it.
    - The cache does not seem to be the problem: we guided a user through emptying his cache and he still could not see the site correctly.

    Read the article

  • Get unique data in a SQL query

    - by Jensen
    Hi, I have a database that contains data in this form:

        icon(name, size, tag)
        (myicon.png, 16, 'twitter')
        (myicon.png, 32, 'twitter')
        (myicon.png, 128, 'twitter')
        (myicon.png, 256, 'twitter')
        (anothericon.png, 32, 'facebook')
        (anothericon.png, 128, 'facebook')
        (anothericon.png, 256, 'facebook')

    As you can see, the name field is not unique: I can have multiple icons with the same name, distinguished by the size field. Now in PHP I have a query that gets ONE icon per set, for example:

        $dbQueryIcons = mysql_query("SELECT * FROM pl_icon WHERE tag LIKE '%".$SEARCH_QUERY."%'
            GROUP BY name ORDER BY id DESC
            LIMIT ".$firstEntry.", ".$CONFIG['icon_per_page']."") or die(mysql_error());

    With this example, if $tag contains 'twitter' it will show ONLY the first data entry with the tag 'twitter', so it will be:

        (myicon.png, 16, 'twitter')

    This is what I want, but I would prefer the 128 size by default. Is it possible to tell SQL to send me only the 128 size when it exists, and another size if not? In another question someone gave me a solution with GROUP BY, but in this case that doesn't work because we already GROUP BY name, and if I delete the GROUP BY it shows me every size of the same icon. Thanks!

    Read the article

  • Android and fairly large SQLite datafiles

    - by SK9
    I'm starting an Android project, a port of an existing iPhone project I've completed. I have a fairly large read-only SQLite database, about 100MB in all, called "mydata.sqlite". Where do I place this in my Eclipse workspace? It's too big for "assets". Next, how do I best get at the file? I would think to try (handling exceptions later) something like:

        SQLiteDatabase myDatabase = null;
        myDatabase = SQLiteDatabase.openDatabase(myPath, null, SQLiteDatabase.OPEN_READONLY);

    But I would then need the path string myPath, and since I don't know where to put the resource I don't know what this needs to be. Can I put "mydata.sqlite" into "res/raw" (once I create "raw" in Eclipse) and then reference it as a resource with "R.raw.mydata"? I would very much appreciate some direct help here, rather than a reference to a tutorial. I have checked tons of these, including those already cited here on Stack Overflow, and I've also gone through the "Notepad" project in the Android developer documents. However, these and the documentation typically consider only new, empty or small databases. This should be a simple thing, and given the time I've spent already it is perhaps easier to ask. Thank you kindly in advance for your assistance.

    Read the article

  • What's a good way to write batch scripts in C#?

    - by Scott Bilas
    I would like to write simple scripts in C# - the kind of thing I would normally use .bat or 4NT .btm files for: copying files, parsing text, asking for user input, and so on. Fairly simple stuff, but doing it right in a batch file is really hard (no exceptions, for example). I'm familiar with command-line "scripting" wrappers like AxScript, so that gets me part of the way there. What I'm missing is an easy file-manipulation framework: I want to be able to do cd(".."), copy(srcFile, destFile) type functionality. Tools I have tried:

    - NAnt, which we use in our build process. Not a good scripting tool: insanely verbose XML syntax, and to add a simple function you must write an extension assembly - you can't do it inline.
    - PowerShell. Looks great, but I just haven't been able to switch over to it as my primary shell; too many differences from 4NT. Whatever I do needs to run from an ordinary command prompt and not require a special shell to run it through. Can PowerShell be used as a script executor?
    - Perl/Python/Ruby. I really hate learning an entirely new language and framework just to do batch file operations, and I haven't been able to dedicate the time this needs. Plus, we're a 99% .NET shop for our toolchain and I really want to leverage our existing experience and codebase.

    Are there frameworks out there that try to solve this problem of "make a batch file in C#" that you have used? I want the power of C#/.NET with the immediate-mode functionality of a typical cmd.exe shell language. Am I alone in wanting something like this?
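
    For a rough sense of what a "batch file in C#" can look like with nothing but the framework, here is a minimal, hedged sketch of a plain console program that uses System.IO for the cd/copy style operations. The file pattern, folder names and the decision to overwrite on copy are assumptions for illustration, not anything from the original post.

        using System;
        using System.IO;

        // Sketch of a batch-style script: exceptions instead of ERRORLEVEL checks.
        // "*.log" and the "backup" folder are placeholder assumptions.
        class CopyLogs
        {
            static void Main()
            {
                // equivalent of "cd .."
                Directory.SetCurrentDirectory("..");

                // copy *.log into a backup folder, creating it if needed
                Directory.CreateDirectory("backup");
                foreach (string srcFile in Directory.GetFiles(".", "*.log"))
                {
                    string destFile = Path.Combine("backup", Path.GetFileName(srcFile));
                    File.Copy(srcFile, destFile, true);  // true = overwrite (an assumption)
                }

                // simple user input, like a prompt in a batch file
                Console.Write("Delete originals? (y/n) ");
                if (Console.ReadLine() == "y")
                {
                    foreach (string srcFile in Directory.GetFiles(".", "*.log"))
                        File.Delete(srcFile);
                }
            }
        }

    Compiled with csc and run from an ordinary command prompt, a program like this stays entirely within the .NET toolchain the post wants to leverage; the trade-off is a compile step in place of an interpreted script.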

    Read the article

  • Use jquery ':contains' to find specific javascript within a span

    - by Rob
    This is my first time here; I hope this is clear. I will have code similar to this:

        <span class="mediaSource ui-draggable" id="purchsePlay7915504">
          <a href="" onclick="return popup_window(this, 'MediaView', 850, 680)" class="control" id="GenericLink"></a>
          <img id="Any_71" alt="Media Source" src="images/9672web.gif" class="mediaStationIcons mediaWin"/>
          <img class="player" alt="Media Source" src="images/playmedia.gif" style="display: none;"/>
        </span>

    The href portion is generated on the backend, and I have no access to it. I need to modify some existing jQuery code to do something based on what the onclick function is (there are different ones, e.g. popup_window1, popup_window2, etc.). I tried something like this:

        $('.segmentLeft span.mediaSource').click(function(){
          if ($('span:contains("popup_window")').length > 0) {
            // do something
          }
        });

    but it does not seem to work.

    Read the article

  • Logic: Best way to sample & count bytes of a 100MB+ file

    - by Jami
    Let's say I have a 170MB file (roughly 180 million bytes). What I need to do is create a table that lists:

    - all 4096-byte combinations found [column 'bytes'], and
    - the number of times each byte combination appeared [column 'occurrences']

    Assume two things: I can save data very fast, but I can update my saved data very slowly. How should I sample the file and save the needed information? Here are some approaches that are (extremely) slow:

    - Go through each 4096-byte combination in the file and save each one, but search the table first for existing combinations and update their values. This is unbelievably slow.
    - Go through each 4096-byte combination in the file and save until there are 1 million rows of data in a temporary table. Go through that table and fix the entries (combine repeating byte combinations), then copy to the big table. Repeat with the next 1 million rows. This is a bit faster, but still unbelievably slow.

    This is kind of like taking the statistics of the file. NOTE: I know that sampling the file can generate tons of data (around 22GB from experience), and I know that any solution posted would take a while to finish. I need the most efficient saving process.
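
    One common way to avoid the per-row table lookups of the first approach is to aggregate counts in memory and write each batch out in one bulk operation. Below is a minimal, hedged C# sketch of that idea only; the input file name, the flush threshold, whether windows overlap, and the SaveBatch placeholder are all assumptions for illustration, and the sketch deliberately ignores the cost of building the keys themselves.

        using System;
        using System.Collections.Generic;
        using System.IO;

        // Sketch: count fixed-size byte windows in memory, flush aggregated batches.
        // WindowSize, FlushThreshold and SaveBatch are illustrative assumptions.
        class ChunkCounter
        {
            const int WindowSize = 4096;
            const int FlushThreshold = 100000;   // flush after ~100k distinct keys; tune for memory

            static void Main()
            {
                var counts = new Dictionary<string, int>();
                byte[] data = File.ReadAllBytes("input.bin");   // ~170MB fits in memory

                // sliding window, stepping one byte at a time
                // (adjust the step if the combinations are non-overlapping blocks)
                for (int i = 0; i + WindowSize <= data.Length; i++)
                {
                    string key = Convert.ToBase64String(data, i, WindowSize);
                    int n;
                    counts[key] = counts.TryGetValue(key, out n) ? n + 1 : 1;

                    if (counts.Count >= FlushThreshold)
                    {
                        SaveBatch(counts);   // one bulk write instead of row-by-row updates
                        counts.Clear();
                    }
                }
                SaveBatch(counts);
            }

            // Placeholder: in a real run this would bulk-insert the batch into a staging
            // table and let the database combine duplicates afterwards.
            static void SaveBatch(Dictionary<string, int> batch)
            {
                foreach (var pair in batch)
                    Console.WriteLine(pair.Key + "\t" + pair.Value);
            }
        }

    The point of the sketch is the shape of the work: many cheap in-memory increments, few expensive writes, with duplicate combination merging pushed to a single pass at the end.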

    Read the article

  • shell script segment to avoid overwriting files

    - by johndashen
    I have a perl script (or any executable) E which will take a file foo.xml and write a file foo.txt. I use a Beowulf cluster to run E for a large number of XML files, but I'd like to write a simple job server script in shell (bash) which doesn't overwrite existing txt files. I'm currently doing something like:

        #!/bin/sh
        PATTERN="[A-Z]*0[1-2][a-j]";  # this matches foo in all cases
        todo=`ls *.xml | grep $PATTERN`;
        isdone=`ls *.foo | grep $PATTERN`;
        whatsleft=todo - isdone;      # what's the unix magic?
        # and then call the job server
        jobserve E "$whatsleft";

    and then I don't know how to get the difference between $todo and $isdone. I'd prefer using sort/uniq to something like a for loop with grep inside, but I'm not sure how to do it (pipes? temporary files?). As a bonus question, is there a way to do lookahead search in bash grep?

    Read the article

  • Invalid table view update with insertRowsAtIndexPaths:

    - by Crystal
    I'm having trouble with insertRowsAtIndexPaths:. I'm not quite sure how it works. I watched the WWDC 2010 video on it, but I'm still getting an error. I thought I was supposed to update the model, then wrap the insertRowsAtIndexPaths: call in the table view's beginUpdates and endUpdates calls. What I have is this:

        self.customTableArray = (NSMutableArray *)sortedArray;
        [_customTableView beginUpdates];
        [tempUnsortedArray enumerateObjectsUsingBlock:^(id obj, NSUInteger idx, BOOL *stop) {
            [sortedArray enumerateObjectsUsingBlock:^(id sortedObj, NSUInteger sortedIdx, BOOL *sortedStop) {
                if ([obj isEqualToString:sortedObj]) {
                    NSIndexPath *newRow = [NSIndexPath indexPathForRow:sortedIdx inSection:0];
                    [_customTableView insertRowsAtIndexPaths:[NSArray arrayWithObject:newRow]
                                            withRowAnimation:UITableViewRowAnimationAutomatic];
                    *sortedStop = YES;
                }
            }];
        }];
        [_customTableView endUpdates];

    customTableArray is my model array; sortedArray is just the sorted version of that array. When I run this code after hitting my plus button to add a new row, I get this error:

        Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason:
        'Invalid update: invalid number of rows in section 0. The number of rows contained in an
        existing section after the update (2) must be equal to the number of rows contained in
        that section before the update (1), plus or minus the number of rows inserted or deleted
        from that section (2 inserted, 0 deleted) and plus or minus the number of rows moved into
        or out of that section (0 moved in, 0 moved out).'

    I'm not sure what I'm doing wrong. Thoughts? Thanks.

    Read the article

  • Automating Excel through the PIA makes VBA go squiffy.

    - by Jon Artus
    I have absolutely no idea how to start diagnosing this, and just wondered if anyone had any suggestions. I'm generating an Excel spreadsheet by calling some macros from a C# application, and during the generation process something breaks. I've got a VBA class containing all of my logging/error-handling logic, which I instantiate using a singleton-esque accessor, shown here:

        Private mcAppFramework As csys_ApplicationFramework

        Public Function AppFramework() As csys_ApplicationFramework
            If mcAppFramework Is Nothing Then
                Set mcAppFramework = New csys_ApplicationFramework
                Call mcAppFramework.bInitialise
            End If
            Set AppFramework = mcAppFramework
        End Function

    The above code works fine before I've generated the spreadsheet, but fails afterwards. The problem seems to be the following line, which I've never seen fail before:

        Set mcAppFramework = New csys_ApplicationFramework

    If I add a watch to the variable being assigned here, the type shows as csys_ApplicationFramework/wksFoo, where wksFoo is a random worksheet in the same workbook. What seems to be happening is that while the variable is of the right type, rather than filling that slot with a new instance of my framework class, it's making it point to an existing worksheet instead - the equivalent of

        Set mcAppFramework = wksFoo

    which is a compiler error, as one might expect. Even more bizarrely, if I put a breakpoint on the offending line, edit the line, and then resume execution, it works. For example, I delete the word 'New', move off the line, move back, re-type 'New' and resume execution. This somehow 'fixes' the workbook and it works happily ever after, with the type of the variable in my watch window showing as csys_ApplicationFramework/csys_ApplicationFramework as I'd expect. This implies that manipulating the workbook through the PIA is somehow breaking it temporarily. All I'm doing in the PIA is opening the workbook, calling several macros using Excel.Application.Run(), and saving it again. I can post a few more details if anyone thinks they're relevant. I don't know how VBA creates objects behind the scenes or how to debug this, and I don't know how the way the code executes can change without the code itself changing. As previously mentioned, VBA has frankly gone a bit squiffy on me... Any thoughts?
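
    For context, here is a minimal, hedged C# sketch of the driving side described above - open the workbook, run macros via Excel.Application.Run(), save - with explicit COM cleanup. The file path and macro names are placeholders, and the release-and-quit pattern is general interop hygiene rather than anything confirmed to fix the VBA behaviour in the question.

        using System;
        using System.Runtime.InteropServices;
        using Excel = Microsoft.Office.Interop.Excel;

        // Hedged sketch of the C# driver: open, run macros, save, release COM objects.
        // "Book.xlsm" and the macro names are placeholder assumptions.
        class ExcelDriver
        {
            static void Main()
            {
                var app = new Excel.Application();
                app.Visible = false;
                Excel.Workbook wb = null;
                try
                {
                    wb = app.Workbooks.Open(@"C:\temp\Book.xlsm");
                    app.Run("GenerateReport");      // the macros that build the spreadsheet
                    app.Run("FormatReport");
                    wb.Save();
                }
                finally
                {
                    if (wb != null) { wb.Close(false); Marshal.ReleaseComObject(wb); }
                    app.Quit();
                    Marshal.ReleaseComObject(app);  // avoid leaving a hidden EXCEL.EXE behind
                }
            }
        }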

    Read the article

  • Common methods/implementation across multiple WCF Services

    - by Rob
    I'm looking at implementing some WCF services as part of an API for 3rd parties to access data within a product I work on. There is currently a set of services exposed as "classic" .NET web services, and I need to emulate their behaviour, at least in part. The existing services all have an AcquireAuthenticationToken method that takes a set of parameters (username, password, etc.) and returns a session token (represented as a GUID), which is then passed in on calls to any other method. (There's also a ReleaseAuthenticationToken method - no guesses needed as to what that does!) What I want to do is implement multiple WCF services, such as:

    - ProductData
    - UserData

    and have both of these services share a common implementation of Acquire/Release. From the base project created by VS2k8, it would appear I will start with the following, per service:

        public class ServiceName : IServiceName { }
        public interface IServiceName { }

    Therefore my questions would be:

    1. Will WCF tolerate me adding a base class to this (public class ServiceName : ServiceBase, IServiceName), or does the fact that there's an interface involved mean that won't work?
    2. If the answer to question 1 is "no, it won't work", could I change IServiceName so that it extends another interface, IServiceBase, thus forcing the presence of the Acquire/Release methods but then having to supply the implementation in each service?
    3. Are 1 and 2 both really bad ideas, and is there actually a much better solution that, knowing next to nothing about WCF, I just haven't thought of?
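
    A minimal, hedged sketch of what the base-class approach in question 1 could look like is below. The contract shapes, the attribute placement, the token-per-call parameter and the naive in-memory session store are assumptions chosen purely for illustration; they are not a statement of how the existing services work or of required WCF configuration.

        using System;
        using System.Collections.Generic;
        using System.ServiceModel;

        // Shared implementation of Acquire/Release that each service inherits.
        // The token store here is a naive in-memory dictionary, purely illustrative.
        public abstract class ServiceBase
        {
            private static readonly Dictionary<Guid, string> Sessions = new Dictionary<Guid, string>();

            public Guid AcquireAuthenticationToken(string username, string password)
            {
                // real credential checking omitted; assumption for the sketch
                Guid token = Guid.NewGuid();
                lock (Sessions) { Sessions[token] = username; }
                return token;
            }

            public void ReleaseAuthenticationToken(Guid token)
            {
                lock (Sessions) { Sessions.Remove(token); }
            }

            protected bool IsValidToken(Guid token)
            {
                lock (Sessions) { return Sessions.ContainsKey(token); }
            }
        }

        [ServiceContract]
        public interface IProductData
        {
            [OperationContract]
            Guid AcquireAuthenticationToken(string username, string password);

            [OperationContract]
            void ReleaseAuthenticationToken(Guid token);

            [OperationContract]
            string GetProductName(Guid token, int productId);
        }

        // The service class inherits the shared logic and still implements its own contract;
        // the inherited public methods satisfy the contract's Acquire/Release operations.
        public class ProductData : ServiceBase, IProductData
        {
            public string GetProductName(Guid token, int productId)
            {
                if (!IsValidToken(token)) throw new FaultException("Not authenticated.");
                return "placeholder";   // data access omitted
            }
        }

    The same ServiceBase would be reused by a UserData service implementing its own contract, which is the sharing the question is after.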

    Read the article

  • jQuery: responding to click event of element added to document after page load

    - by morpheous
    I am writing a page which uses a lot of in-situ editing and updating, using jQuery for AJAX. I have come across a problem which can best be summarised by the workflow described below:

    1. Clicking on 'element1' on the page results in a jQuery AJAX POST
    2. Data is received in JSON format
    3. The data received in JSON format
    4. The received data is used to update an existing element 'results' in the page
    5. The received data is actually an HTML form
    6. I want jQuery to be responsible for POSTing the form when the form button is clicked

    The problem arises at point 6 above. I have code in my main page which looks like this:

        $(document).ready(function(){
            $('img#inserted_form_btn').click(function(){
                $.ajax({
                    'type': 'POST',
                    'url': 'www.example.com',
                    'success': function($data){ $(data.id).html($data.frm); },
                    'dataType': 'json'
                });
            });
        });

    However, the event is not being triggered. I think this is because when the document is first loaded, the img#inserted_form_btn element does not exist on the page (it is inserted into the DOM as the result of an element being clicked on the page - not shown in the code above, to keep the question short). My question therefore is: how can I get jQuery to respond to events occurring on elements that were added to the DOM AFTER the page has loaded?

    Read the article

  • ASP.NET DynamicData: What's happening during an update?

    - by Jens A.
    I am using ASP.NET Dynamic Data (based on LINQ to SQL) on my site for basic scaffolding. On one table I have added additional properties that are not stored in the table but are retrieved from somewhere else (profile information for a user account, in this case). They are displayed just fine, but when editing these values and pressing "Update", they are not changed. Here's what the properties look like; the table is the standard aspnet_Users table:

        public String Address
        {
            get
            {
                UserProfile profile = UserProfile.GetUserProfile(UserName);
                return profile.Address;
            }
            set
            {
                UserProfile profile = UserProfile.GetUserProfile(UserName);
                profile.Address = value;
                profile.Save();
            }
        }

    When I fired up the debugger, I noticed that for each update the set accessor is called three times: once with the new value, but on a newly created instance of the user; then once with the old value, again on a new instance; and finally with the old value on the existing instance. Wondering a bit, I checked the properties created by the designer, and they too are called three times in (almost) the same fashion. The only difference is that the last call contains the new value for the property. I am a bit stumped here. Why three times, and why are my new properties behaving differently? I'd be grateful for any help on that matter! =)

    Read the article

  • Job queueing and execution mechanism

    - by Calm Storm
    In my web service, all method calls submit jobs to a queue. These operations take a long time to execute, so each one submits a job to the queue and returns a status saying "Submitted". The client then keeps polling, using another service method, to check the status of the job. Presently, what I do is create my own Queue and Job classes that are Serializable, and persist these jobs (i.e. their serialized byte-stream form) to the database. So an UpdateLogistics operation just queues up an "UpdateLogisticsJob" and returns. I have written my own JobExecutor which wakes up every N seconds, scans the database table for existing jobs, and executes them. Note that the jobs have to be persisted because they must survive app-server crashes. This was done a long time ago, using bespoke classes for my Queues, Jobs, Executors etc., but now I would like to know whether someone has done something similar before. In particular:

    - Are there frameworks available for this? Something in Spring/Apache etc.?
    - Any framework that is easy to adapt/debug and plays well with libraries like Spring would be great.

    Read the article

  • JQuery not removing added element

    - by Scott
    What I want to do is add and remove list items. I have got it to add new items to the list, and I can remove existing ones but not the ones that have been added. It seems like it should work, but it doesn't. Any help would be appreciated! Here's the code:

    jQuery:

        <script type="text/javascript">
        $(function(){
            $('a#add').click(function(){
                $('<li><a href="#" id="remove">--</a>List item</li>').appendTo('ul#list');
            });
            $('a#remove').click(function(){
                $(this).parent().remove();
            });
        });
        </script>

    HTML:

        <a href="#" id="add">Add List Item</a>
        <ul id="list">
            <li><a href="#" id="remove">--</a> List item</li>
            <li><a href="#" id="remove">--</a> List item</li>
            <li><a href="#" id="remove">--</a> List item</li>
            <li><a href="#" id="remove">--</a> List item</li>
        </ul>

    Read the article

  • Difference between two PHP times as years, months and days for PHP version 5.2

    - by Dominor Novus
    Forward: I've scanned through the existing questions/answers on this matter; this is not a duplicate question, and I cannot find a working solution among the accepted answers. The main question/answers I've reviewed can be found here: How to calculate the difference between two dates using PHP?

    What I need: a calculation of the difference between two dates, expressed as years, months and days, that works with PHP version 5.2.

        <?php
        $current_date = date('d-M-Y');
        $future_date = '2012-11-01';
        ?>

    What I've tried: most answers I find online don't seem exact, in that they don't factor in leap years. This highly rated answer won't work because DateTime->diff() is PHP 5.3+. The accepted answer (i.e. the second block of code, aimed at PHP 5.2) results in the following being printed:

        Array ( [y] => 25 [m] => 11 [d] => 7 [h] => 3 [i] => 15 [s] => 19 [invert] => 0 [days] => 9473 )
        Array ( [y] => 25 [m] => 11 [d] => 7 [h] => 3 [i] => 15 [s] => 19 [invert] => 1 [days] => 9473 )

    I can't tell if I've incorrectly applied the code or if it's simply a case of me not knowing how to manipulate the array.

    Read the article

  • Finding patterns of failure in a Unit Test

    - by Pekka
    I'm new to unit testing, and I'm only getting into the routine of building test suites. I have what is going to be a rather large project, and I want to build tests for it from the start. I'm trying to figure out general strategies and patterns for building test suites. When you look at a class, many tests come to you obviously, due to the nature of the class. Say, for a "user account" class with basic CRUD operations backed by a database table, we will want to test the CRUD itself:

    - create an object and see whether it exists
    - query its properties
    - change some properties
    - change some properties to incorrect values
    - delete it again

    As for how to break things, there are "fail" tests common to most CRUD classes, like:

    - invalid input data types
    - a number as the ID key that exceeds the range of the chosen data type
    - input in an incorrect character encoding
    - input that is too long

    and so on. For a unit test concerned with file operations, the list of "breaking things" could be:

    - invalid characters in the file name
    - file name too long
    - file name uses an incorrect protocol or path

    I'm pretty sure similar patterns - applicable beyond the unit test one is currently working on - can be found for most units under test. Now my questions are:

    - Am I correct in seeing such "breaking patterns"? Or am I getting something completely wrong about unit testing, and if I did it right, this wouldn't be an issue at all? Is unit testing as a process of finding as many ways to break the unit as possible the right way to go?
    - If I am correct: are there existing definitions, lists, or cheat sheets for such patterns?
    - Are there any provisions (mainly in PHPUnit, as that's the framework I'm working in) to automate such patterns?
    - Is there any assistance - in the form of checklists or software - to aid in writing complete tests?

    Read the article

  • Reading input files in FORTRAN

    - by lollygagger
    Purpose: create a program that takes two separate files, opens and reads them, assigns their contents to arrays, does some math with those arrays, creates a new array with the product numbers, and prints it to a new file. Simple enough, right? My input files have comment characters at the beginning. One trouble is, they are '#', which is a comment character for most plotting programs but not for Fortran. What is a simple way to tell the computer not to look at these characters? Since I have no previous Fortran experience, I am plowing through this with two test files. Here is what I have so far:

        PROGRAM gain
        IMPLICIT NONE
        REAL, DIMENSION (1:4, 1:8) :: X, Y, Z

        OPEN(1, FILE='test.out', &
             STATUS='OLD', ACTION='READ')    ! opens the first file
        READ(1,*), X

        OPEN(2, FILE='test2.out', &
             STATUS='OLD', ACTION='READ')    ! opens the second file
        READ(2,*), Y

        PRINT*, X, Y
        Z = X*Y
        ! PRINT*, Z

        OPEN(3, FILE='test3.out', STATUS='NEW', ACTION='WRITE')  ! creates a new file
        WRITE(3,*), Z

        CLOSE(1)
        CLOSE(2)
        CLOSE(3)

        END PROGRAM

    PS. Please do not overwhelm me with a bunch of code-monkey gobbledygook. I am a total programming novice; I do not understand all the lingo, which is why I came here instead of searching through existing websites. Thanks.

    Read the article

  • Adding new elements into DOM using JavaScript (appendChild)

    - by KatieK
    I sometimes need to add elements (such as a new link and image) to an existing HTML page, but I only have access to a small portion of the page, far from where I need to insert elements. I want to use DOM-based JavaScript techniques, and I must avoid using document.write(). Thus far, I've been using something like this:

        // Create new image element
        var newImg = document.createElement("img");
        newImg.src = "images/button.jpg";
        newImg.height = "50";
        newImg.width = "150";
        newImg.alt = "Click Me";

        // Create new link element
        var newLink = document.createElement("a");
        newLink.href = "/dir/signup.html";

        // Append new image into new link
        newLink.appendChild(newImg);

        // Append new link (with image) into its destination on the page
        document.getElementById("newLinkDestination").appendChild(newLink);

    Is there a more efficient way I could use to accomplish the same thing? It all seems necessary, but I'd like to know if there's a better way I could be doing this. Thanks!

    Read the article

  • How can Flash call a jQuery function in its events

    - by user2955639
    I want jQuery to do something during certain events while an audio file is playing, so I'm writing a function like this:

        <script>
        $.fn.playMedia = function(options){
            var opts = $.extend({}, {
                swfSrc: '',
                timeUpdated: function(currentTime){},
                startPlay: function(){},
                endPlay: function(){}
            }, options);

            return $(this).each(function(){
                // call Flash to play the media whose src is opts.swfSrc.
                // Is it possible for Flash to call the JS functions (opts.timeUpdated,
                // opts.startPlay and opts.endPlay) each time the corresponding event is triggered?
            });
        };
        </script>

        // Usage
        <div id="player"></div>
        <script>
        $('#player').playMedia({
            swfSrc: '/path/song.mp3',
            timeUpdated: function(currentTime){
                console.log(currentTime);
            }
        });
        </script>

    I'm a total layman when it comes to Flash; I just assume this can work. I hope someone can tell me how to build a SWF file for this jQuery function, or point me to an existing jQuery plugin that does this but whose appearance can be redesigned flexibly. Thank you very much!

    Read the article

  • Getting a stream back from a .NET Remoting service that is accessible over IPv4 and IPv6

    - by jon.ediger
    My company has an existing .NET Remoting service that listens on a port, fronting interfaces used by external systems. This all works great with IPv4-based communications. However, the service now needs to support both IPv4 and IPv6 communications. I have found information that the system.runtime.remoting section of the app.config should include two channels, as follows:

        <channel ref="tcp" name="tcp6" port="9000" bindTo="[::]" />
        <channel ref="tcp" name="tcp4" port="9000" bindTo="0.0.0.0" />

    With these config changes, the remoting service responds to non-stream functions over both IPv4 and IPv6. The issue comes only when attempting to get a stream back, used to upload or download large files. In this case, instead of getting a usable stream back, the following ArgumentException is thrown:

        IPv4 address 0.0.0.0 and IPv6 address ::0 are unspecified addresses that cannot be
        used as a target address.
        Parameter name: hostNameOrAddress

    Is there a way to modify the app.config (in the system.runtime.remoting section or another section) so that the service returns a stream mapped to a real IP, letting the client actually upload/download files while keeping the ability to use both IPv4 and IPv6?

    Read the article

  • Identity alternative for SQL Azure Federation: are Azure Queues or Service Bus Queues a good choice?

    - by JYL
    As many developers are, I'm looking for a way to migrate my existing app to SQL Azure Federations, and replacing the Identity columns (the primary keys of my tables) is a big problem. For many reasons, I do NOT want to use GUIDs for my primary keys (please don't open the GUID-or-not debate, it's not my question: I just don't want a GUID, period). So I need to build a key provider to replace the "identity" feature of a standard SQL database. I'm using Entity Framework, so I can easily find one place to set the Id value just before the insert (by overriding the SaveChanges method of my ObjectContext class). I just need a "not too complicated" implementation for getting the current Id that is "farm-ready". I've read this SO post, "ID Generation for Sharded Database (Azure Federated Database)", and "Synchronizing Multiple Nodes in Windows Azure" from MSDN Magazine, but that solution sounds a bit complicated to me.

    I'm thinking about creating (automatically) one Azure queue for each SQL table, pre-loaded with a list of consecutive integers. When I want an Id value, I just get a message from the queue (which becomes invisible and is deleted on the way), which gives me the next available Id. Between "Windows Azure Queues" and "Windows Azure Service Bus Queues", I prefer "Windows Azure Queues", due to the higher latency of Service Bus Queues. I don't think the lack of an ordering guarantee in Azure Queues is a problem.

    What do you think about the idea of using Azure Queues to provide Id values? Do you see any argument for giving up that idea? Do you have a better idea, or even a good practice, for providing integer ids in SQL Azure Federation databases? Thanks.
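
    A minimal, hedged sketch of the queue-backed id provider the question describes, using the Windows Azure storage client, is below. The connection string, the "ids-" queue naming convention and the pre-loading step are assumptions for illustration only, and the sketch keeps the obvious caveat: a message consumed after GetMessage but lost before use is a skipped id, never a reused one.

        using System;
        using Microsoft.WindowsAzure.Storage;
        using Microsoft.WindowsAzure.Storage.Queue;

        // Hedged sketch: one queue per table, pre-loaded with consecutive integers.
        // The connection string and the "ids-" prefix are placeholder assumptions.
        public class QueueIdProvider
        {
            private readonly CloudQueueClient _client;

            public QueueIdProvider(string connectionString)
            {
                _client = CloudStorageAccount.Parse(connectionString).CreateCloudQueueClient();
            }

            // Called up front (e.g. at deployment time): pre-load a block of ids for a table.
            public void Preload(string tableName, int from, int count)
            {
                CloudQueue queue = _client.GetQueueReference("ids-" + tableName.ToLowerInvariant());
                queue.CreateIfNotExists();
                for (int id = from; id < from + count; id++)
                    queue.AddMessage(new CloudQueueMessage(id.ToString()));
            }

            // Called from the SaveChanges override just before an insert.
            public int NextId(string tableName)
            {
                CloudQueue queue = _client.GetQueueReference("ids-" + tableName.ToLowerInvariant());
                CloudQueueMessage msg = queue.GetMessage();
                if (msg == null)
                    throw new InvalidOperationException("Id queue for " + tableName + " is empty.");
                queue.DeleteMessage(msg);   // consume it so the id is never handed out twice
                return int.Parse(msg.AsString);
            }
        }

    Because each message is handed to only one consumer at a time, several app-server instances can call NextId concurrently without coordinating with each other, which is the "farm-ready" property the question is after.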

    Read the article

  • WordPress update_post_meta values. Delete when empty or just test for ""?

    - by Scott B
    My function below takes the values from my custom meta fields (after a post has been edited and Save or Publish has been clicked) and updates or inserts the posted meta values. However, if the user leaves a field blank, I believe I want to delete the meta altogether (so I can test for its presence and display accordingly, versus just checking for ""). For example, one of my meta options gives the user the ability to add a custom title to their post which, when present, will populate the page's <title> tag. If the field is left empty, I want the <title> tag to default to the_title(), which is simply the post title used to identify the page/post. Since I'm not deleting the meta on save, it is always present after the first time a user enters something in the field, so get_post_meta($post->ID, 'MyCustomTitle', true) is always true. Further, the user cannot blank it out by clearing the title field and hitting Publish. What am I missing in the save routine in order to clear the value to "" when the user clears the field?

        if ($_POST['MyCustomTitle']) {
            update_custom_meta($postID, $_POST['MyCustomTitle'], 'MyCustomTitle');
        }

        function update_custom_meta($postID, $newvalue, $field_name) {
            // To create new meta
            if (!get_post_meta($postID, $field_name)) {
                add_post_meta($postID, $field_name, $newvalue);
            } else {
                // or to update existing meta
                update_post_meta($postID, $field_name, $newvalue);
            }
        }

    Read the article

  • MySQL Datefields: duplicate or calculate?

    - by Konerak
    We are using a table with a structure imposed upon us more than 10 years ago. We are allowed to add columns, but urged not to change existing columns. Certain columns are meant to represent dates, but are stored in different formats, amongst others:

    - CHAR(6): YYMMDD
    - CHAR(6): DDMMYY
    - CHAR(8): YYYYMMDD
    - CHAR(8): DDMMYYYY
    - DATE
    - DATETIME

    Since we now would like to do some more complex queries using advanced date functions, my manager proposed to duplicate those problem columns into a proper FORMATTED_OLDCOLUMNNAME column using a DATE or DATETIME format. Is this the way to go? Couldn't we just use the STR_TO_DATE function each time we accessed the columns? To avoid every query having to copy-paste the function, I could still work with a view or a stored procedure, but duplicating data to avoid recalculation sounds wrong. Solutions I see (I guess I prefer 2.2.1):

    1. Physically duplicate columns
       1.1 In the same table
           1.1.1 Added by each script that does a modification (INSERT/UPDATE/REPLACE/...)
           1.1.2 Maintained by a trigger on each modification
       1.2 In a separate table
           1.2.1 Added by each script that does a modification (INSERT/UPDATE/REPLACE/...)
           1.2.2 Maintained by a trigger on each modification
    2. On-demand transformation
       2.1 Each query has to perform the transformation
           2.1.1 Using copy-paste in the source code
           2.1.2 Using a library
           2.1.3 Using a STORED PROCEDURE
       2.2 A view performs the transformation
           2.2.1 A separate table replacing the entire table
           2.2.2 A separate table just adding the date fields for the primary keys

    Am I right to say it's better to recalculate than to store? And would a view be a good solution?

    Read the article

  • What happens to existing workspaces after upgrading to TFS 2010

    - by user351671
    Hi, I was looking for some insight into what happens to existing workspaces, and to files that are already checked out, after an upgrade to TFS 2010. Surprisingly, I cannot find any satisfactory information on this. (I am talking about upgrading on new hardware, by the way: a fresh TFS instance with upgraded databases.) I've checked the TFS Installation Guide and searched the web, and all I could find are upgrade scenarios for the server side; nobody even mentions what happens to source control clients. I created a virtual machine to test the upgrade process. The upgrade was successful, and all my files and workspaces exist on the new server too. The problem is: the new TFS installation has a new instance ID. When I redirected the clients to the new server, the client seemed unable to match the files and file states in the workspace with the ones on the new server. This makes me wonder whether it will be possible to keep working after the production upgrade. As mentioned above, I cannot find anything on this; it would be great if anyone could point me to a paper or blog post about it. Thanks in advance...

    Read the article
