Search Results

Search found 17501 results on 701 pages for 'stored functions'.


  • JPA - Entity design problem

    - by Yatendra Goel
    I am developing a Java desktop application and using JPA for persistence. I have two entities: Country and City. Country has the attribute CountryName (PK); City has the attribute CityName. As there can be two cities with the same name in two different countries, the primary key for the City table in the database is a composite key composed of CityName and CountryName. Now my question is: how do I implement the primary key of City as an entity in Java?

        @Entity
        public class Country implements Serializable {
            private String countryName;

            @Id
            public String getCountryName() { return this.countryName; }
        }

        @Entity
        public class City implements Serializable {
            private CityPK cityPK;
            private Country country;

            @EmbeddedId
            public CityPK getCityPK() { return this.cityPK; }
        }

        @Embeddable
        public class CityPK implements Serializable {
            public String cityName;
            public String countryName;
        }

    Now, as we know, the relationship from Country to City is one-to-many, and to show this relationship in the code above I have added a country variable to the City class. But then we have duplicate data (countryName) stored in two places in the City class: once in the country object and once in the cityPK object. On the other hand, both are necessary: countryName in the cityPK object because that is how composite primary keys are implemented, and countryName in the country object because that is the standard way of showing a relationship between objects. How do I get around this problem?
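
    One common way around the duplication (a hedged sketch, not necessarily the only mapping that works) is to map the relationship over the same column the embedded id already owns and mark it read-only, so countryName is written exactly once:

        @Entity
        public class City implements Serializable {
            @EmbeddedId
            private CityPK cityPK;

            // maps to the same countryName column as the id, but never writes it
            @ManyToOne
            @JoinColumn(name = "countryName", insertable = false, updatable = false)
            private Country country;
        }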

  • How to determine the size of a project (lines of code, function points, other)

    - by sixtyfootersdude
    How would you evaluate project size? Part A: before you start a project. Part B: for a complete project. I am interested in comparing unrelated projects. Here are some options:

    1) Lines of code. I know that this is not a good metric of productivity, but is it a reasonable measure of project size? If I wanted to estimate how long it would take to recreate a project, would this be a reasonable way to do it? How many lines of code should I estimate a day?

    2) Function points. Function points are defined as the number of: inputs, outputs, inquiries, internal files, external interfaces. Does anyone have a viewpoint on whether this is a good measure? Is there a way to actually do this?

    Does anyone have another solution? Hours taken seems like it could be a useful metric, but not solely. If I asked you which of two given programs is the "bigger program", how would you approach the question? I have seen several discussions of this on Stack Overflow, but most discuss how to measure programmer productivity. I am more interested in project size.

  • How to upload files and store them in a server local path when MS SQL Server allows remote connections

    - by user193655
    I am developing a Win32 Windows application with Delphi and MS SQL Server. It works fine in a LAN, but I am trying to add support for SQL Server remote connections (i.e. working with a DB that can be accessed through an external IP, as described in this article: http://support.microsoft.com/default.aspx?scid=kb;EN-US;914277). Basically I have a table in the DB where I keep the DocumentID, the document description and the document path (like \\FILESERVER\MyApplicationDocuments\45.zip). Of course \\FILESERVER is a local (LAN) path for the server but not for the client (as I am now trying to add support for remote connections). So I need a way to access \\FILESERVER even though I cannot see it on the LAN. I found the following T-SQL code snippet that is perfect for the "download trick":

        SELECT BulkColumn AS MyFile
        FROM OPENROWSET(BULK '\\FILESERVER\MyApplicationDocuments\45.zip', SINGLE_BLOB) AS X

    With the code above I can download a file to the client. But how do I upload one? I need an "upload trick" to be able to insert new files, but also to delete or replace existing files. Can anyone suggest one? If a trick is not available, could you suggest an alternative, like an extended stored procedure or calling some .NET assembly from the server?
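
    One possible "upload trick" is a hedged sketch only: stage the blob in a table the client can INSERT into, then shell out to bcp to write it server-side. It assumes xp_cmdshell is enabled, the SQL Server service account can write to the share, and DocumentStaging is a helper table you create; bcp -n writes native format, so an exact byte-for-byte copy would need a format file:

        CREATE PROCEDURE UploadDocument
            @DocumentID INT,
            @Content    VARBINARY(MAX),
            @TargetPath NVARCHAR(260)
        AS
        BEGIN
            -- stage the blob where bcp can read it back out
            INSERT INTO DocumentStaging (DocumentID, Content)
            VALUES (@DocumentID, @Content);

            DECLARE @cmd NVARCHAR(4000);
            SET @cmd = N'bcp "SELECT Content FROM MyDb.dbo.DocumentStaging WHERE DocumentID = '
                     + CAST(@DocumentID AS NVARCHAR(12))
                     + N'" queryout "' + @TargetPath + N'" -T -n';
            EXEC master..xp_cmdshell @cmd;

            DELETE FROM DocumentStaging WHERE DocumentID = @DocumentID;
        END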

  • Windows Azure - access webrole local storage from separate workerrole

    - by Brett Smith
    I'm running an application on Windows Azure. The MVC views need to be dynamic; I started by storing them as records in the database, but am quite keen to move them to a physical location. My concept was to create the physical file via code, which worked great and sped up the page load dramatically. This was, of course, before I realised that the files were only available for the duration of the role. Next I looked at a start-up task to create the files when the role was started; however, I then realised that separate instances weren't going to sync up unless I monitored the database for changes. So I moved from a start-up task to a function in the Run method of the role that checks the database every 10 minutes to see if changes have occurred. The problem is that this seems to choke up the application (at least in the warm-up stage). Ideally I would like to move the Run function to its own worker role that can sit there and push files out to web role instances, but I'm unsure how I would go about accessing the web role's local storage from the worker role. Can anybody tell me whether this is actually possible, and hopefully point me in the right direction to achieve it? Just to clarify what I'm trying to achieve:

    - A view is created in the user interface running on the web role and stored in the database.
    - A separate web role (front end) has a client-side application with a virtual path provider pointing view requests to local storage (a LocalResource).
    - A separate worker role creates the view structure and loads it into the client-side web role's local storage.

  • C# creating a Class, having objects as member variables? I think the objects are garbage collected

    - by Bryan
    So I have a class that has the following member variables, with get and set functions for every piece of data in the class:

        public class NavigationMesh {
            public Vector3 node;
            int weight;
            bool isWall;
            bool hasTreasure;

            public NavigationMesh(int x, int y, int z, bool setWall, bool setTreasure) {
                // default constructor
                //Console.WriteLine(x + " " + y + " " + z);
                node = new Vector3(x, y, z);
                //Console.WriteLine(node.X + " " + node.Y + " " + node.Z);
                isWall = setWall;
                hasTreasure = setTreasure;
                weight = 1;
            } // end constructor

            public float getX() { Console.WriteLine(node.X); return node.X; }
            public float getY() { Console.WriteLine(node.Y); return node.Y; }
            public float getZ() { Console.WriteLine(node.Z); return node.Z; }
            public bool getWall() { return isWall; }
            public void setWall(bool item) { isWall = item; }
            public bool getTreasure() { return hasTreasure; }
            public void setTreasure(bool item) { hasTreasure = item; }
            public int getWeight() { return weight; }
        } // end class

    In another class, I have a two-dimensional array that looks like this:

        NavigationMesh[,] mesh;
        mesh = new NavigationMesh[502, 502];

    I use a double for loop to assign it. My problem is that I cannot get the data I need out of the Vector3 node object with my "getters" after I create the objects in my array. I've tried making the Vector3 a static variable; however, I think it then refers to the last instance of the object. How do I keep all of these objects in memory? I think they're being garbage collected. Any thoughts?
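
    For what it's worth, elements of a managed array are never garbage collected while the array itself is reachable, and a static Vector3 would indeed be shared by every instance. A hedged sketch of initialization that leaves no cell null (the usual cause of failing reads here):

        // fill every cell; nothing referenced by `mesh` can be collected
        NavigationMesh[,] mesh = new NavigationMesh[502, 502];
        for (int x = 0; x < 502; x++)
            for (int y = 0; y < 502; y++)
                mesh[x, y] = new NavigationMesh(x, y, 0, false, false);

        float fx = mesh[10, 20].getX();  // works as long as that cell was assigned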

  • Copying contents of a MySQL table to a table in another (local) database

    - by Philip Eve
    I have two MySQL databases for my site - one is for a production environment and the other, much smaller, is for a testing/development environment. Both have identical schemas (except when I am testing something I intend to change, of course). A small number of the tables are for internationalisation purposes:

    - TransLanguage - non-English languages
    - TransModule - modules (bundles of phrases for translation, which can be loaded individually by PHP scripts)
    - TransPhrase - individual phrases, in English, for potential translation
    - TranslatedPhrase - translations of phrases that are submitted by volunteers
    - ChosenTranslatedPhrase - screened translations of phrases

    The volunteers who do translation all work on the production site, as they are regular users. I wanted to create a stored procedure that could synchronise the contents of four of these tables - TransLanguage, TransModule, TransPhrase and ChosenTranslatedPhrase - from the production database to the testing database, so as to keep the test environment up to date and prevent "unknown phrase" errors from getting in the way while testing. My first effort was to create the following procedure in the test database:

        CREATE PROCEDURE `SynchroniseTranslations` ()
        LANGUAGE SQL
        NOT DETERMINISTIC
        MODIFIES SQL DATA
        SQL SECURITY DEFINER
        BEGIN
            DELETE FROM `TransLanguage`;
            DELETE FROM `TransModule`;
            INSERT INTO `TransLanguage` SELECT * FROM `PRODUCTION_DB`.`TransLanguage`;
            INSERT INTO `TransModule` SELECT * FROM `PRODUCTION_DB`.`TransModule`;
            INSERT INTO `TransPhrase` SELECT * FROM `PRODUCTION_DB`.`TransPhrase`;
            INSERT INTO `ChosenTranslatedPhrase` SELECT * FROM `PRODUCTION_DB`.`ChosenTranslatedPhrase`;
        END

    When I try to run this, I get an error message: "SELECT command denied to user 'username'@'localhost' for table 'TransLanguage'". I also tried to create the procedure the other way around (that is, as part of the data dictionary of the production database rather than the test database). If I do it that way, I get an identical message, except that it tells me I'm denied the DELETE command rather than SELECT. I have made sure that my user has INSERT, DELETE, SELECT, UPDATE and CREATE ROUTINE privileges on both databases. However, it seems as though MySQL is reluctant to let this user exercise its privileges on both databases at the same time. How come, and is there a way around this?
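
    A hedged note: with SQL SECURITY DEFINER, the privilege check at call time is made against the account that created the routine, and its grants need to be database-qualified to span both schemas. A minimal sketch of the grants that would satisfy the procedure above (database and account names are placeholders):

        GRANT SELECT ON `PRODUCTION_DB`.* TO 'username'@'localhost';
        GRANT SELECT, INSERT, DELETE ON `TEST_DB`.* TO 'username'@'localhost';
        FLUSH PRIVILEGES;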

  • Reverse search in Hibernate Search

    - by Javi
    Hello, I'm using Hibernate Search (which uses Lucene) for searching some data I have indexed in a directory. It works fine, but I need to do a reverse search. By reverse search I mean that I have a list of queries stored in my database, and I need to check which of these queries match a Data object each time a Data object is created, so I can alert the user when a Data object matches a query he has created. So I need to index this single Data object, which has just been created, and see which queries in my list have this object as a result. I've seen Lucene's MemoryIndex class for creating an index in memory, so I can do something like this example for every query in the list (though iterating over a Java list of queries would not be very efficient):

        // Iterating over my List<Query>
        MemoryIndex index = new MemoryIndex();
        // Add all fields
        index.addField("myField", "myFieldData", analyzer);
        ...
        QueryParser parser = new QueryParser("myField", analyzer);
        float score = index.search(query);
        if (score > 0.0f) {
            System.out.println("it's a match");
        } else {
            System.out.println("no match found");
        }

    The problem here is that this Data class has several Hibernate Search annotations - @Field, @IndexedEmbedded, ... - which indicate how its fields should be indexed, so when I invoke the index() method on the FullTextEntityManager instance, it uses this information to index the object in the directory. Is there a similar way to index it in memory using this information? Is there a more efficient way of doing this reverse search? Thanks

  • Thread-safe data structure design

    - by Inso Reiges
    Hello, I have to design a data structure that is to be used in a multi-threaded environment. The basic API is simple: insert element, remove element, retrieve element, check that an element exists. The structure's implementation uses implicit locking to guarantee the atomicity of a single API call. After I implemented this, it became apparent that what I really need is atomicity across several API calls. For example, if a caller needs to check the existence of an element before trying to insert it, he can't do that atomically even if each single API call is atomic:

        if (!data_structure.exists(element)) {
            data_structure.insert(element);
        }

    The example is somewhat awkward, but the basic point is that we can't trust the result of the "exists" call any more after we return from the atomic context (the generated assembly clearly shows a minor chance of a context switch between the two calls). What I currently have in mind to solve this is exposing the lock through the data structure's public API. This way clients will have to lock things explicitly, but at least they won't have to create their own locks. Is there a better commonly-known solution to these kinds of problems? And as long as we're at it, can you advise some good literature on thread-safe design?

    EDIT: I have a better example. Suppose that element retrieval returns either a reference or a pointer to the stored element and not its copy. How can a caller be protected so as to safely use this pointer/reference after the call returns? If you think that not returning copies is a problem, then think about deep copies, i.e. objects that should also copy the other objects they point to internally. Thank you.
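
    One commonly-known alternative to exposing the lock is to fold each check-then-act sequence into a single atomic operation, in the style of ConcurrentHashMap.putIfAbsent. A minimal sketch (Java-flavoured; the lock, exists and insert names are illustrative):

        // returns true only if this call actually inserted the element
        public boolean insertIfAbsent(E element) {
            synchronized (lock) {
                if (exists(element)) {
                    return false;
                }
                insert(element);
                return true;
            }
        }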

  • How do you store accented characters coming from a web service into a database?

    - by Thierry Lam
    I have the following word that I fetch via a web service: André. From Python, the value looks like "Andr\u00c3\u00a9". The input is then decoded using json.loads:

        >>> import json
        >>> json.loads('{"name":"Andr\\u00c3\\u00a9"}')
        {u'name': u'Andr\xc3\xa9'}

    When I store the above in a utf8 MySQL database, the data is stored like the following using Django:

        SomeObject.objects.create(name=u'Andr\xc3\xa9')

    Querying the name column from a mysql shell or displaying it in a web page gives: André

    The web page displays in utf8:

        <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />

    My database is configured in utf8:

        mysql> SHOW VARIABLES LIKE 'collation%';
        +----------------------+-----------------+
        | Variable_name        | Value           |
        +----------------------+-----------------+
        | collation_connection | utf8_general_ci |
        | collation_database   | utf8_unicode_ci |
        | collation_server     | utf8_unicode_ci |
        +----------------------+-----------------+
        3 rows in set (0.00 sec)

        mysql> SHOW VARIABLES LIKE 'character_set%';
        +--------------------------+----------------------------+
        | Variable_name            | Value                      |
        +--------------------------+----------------------------+
        | character_set_client     | utf8                       |
        | character_set_connection | utf8                       |
        | character_set_database   | utf8                       |
        | character_set_filesystem | binary                     |
        | character_set_results    | utf8                       |
        | character_set_server     | utf8                       |
        | character_set_system     | utf8                       |
        | character_sets_dir       | /usr/share/mysql/charsets/ |
        +--------------------------+----------------------------+
        8 rows in set (0.00 sec)

    How can I retrieve the word André from a web service, store it properly in a database with no data loss and display it on a web page in its original form?
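
    A hedged observation: u'Andr\xc3\xa9' is the UTF-8 byte sequence of é seen as two separate code points, i.e. the value arrives double-encoded, and MySQL is faithfully storing what it is given. A minimal sketch of the usual repair, round-tripping through latin-1 before saving (Python 2, matching the question):

        name = json.loads('{"name":"Andr\\u00c3\\u00a9"}')[u'name']
        fixed = name.encode('latin-1').decode('utf-8')   # u'Andr\xe9', i.e. André
        SomeObject.objects.create(name=fixed)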

  • Oracle DBMS_PROFILER only shows Anonymous in the results tables

    - by Greg Reynolds
    I am new to DBMS_PROFILER. All the examples I have seen use a simple top-level procedure to demonstrate the use of the profiler, and from there get all the line numbers etc. I deploy all code in packages, and I am having great difficulty getting my profile session to populate plsql_profiler_units with useful data. Most of my runs look like this:

        RUNID RUN_COMMENT   UNIT_OWNER  UNIT_NAME   SECS PERCEN
        ----- ------------- ----------- ----------- ---- ------
            5 Test Profiler <anonymous> <anonymous> .00  2.1
            5 Test Profiler <anonymous> <anonymous> .00  2.1
            5 Test Profiler <anonymous> <anonymous> .00  2.1

    I have just embedded the calls to dbms_profiler.start_profiler, flush_data and stop_profiler as per all the examples. The main difference is that my code is in a package, and calls into other packages. Do you have to profile every single stored procedure in your call stack? If so, that makes this tool a little useless! I have checked http://www.dba-oracle.com/t_plsql_dbms_profiler.htm for hints, among other similar sites.

  • IIS Restrict Access to Directory for table of users

    - by Dave
    I am trying to restrict access to files in a directory and its subdirectories based on user rights. My user rights are stored in an MS SQL database in a custom format; however, it is easy to query the list of users with rights to this directory. I need to know how to apply this in a web.config on the server so as to authenticate against a query of a database table, determining whether the username is authenticated and allowed to view the file. Of course, if they are not, they should be blocked / given a 404. I am using IIS and ASP.NET MVC3 with form-based security (as opposed to the built-in roles and responsibilities) that was custom made for us, and it works great. There are over 10k users tied to this non-Active-Directory authentication, so I am not planning to change my authentication type - please don't go there. It is not my decision on the choice of platform, or I would have gone with a LAMP server and been done with this.

    Edit 11-13-2012 @ 8:57a: In the web config, can you put the result of an SQL query?
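
    Web.config authorization rules can't evaluate an SQL query themselves, but in MVC the same effect is usually achieved in code. A hedged sketch of a custom authorization attribute that consults the rights table per request (RightsRepository and its query are placeholders for the custom schema):

        public class DirectoryRightsAttribute : AuthorizeAttribute
        {
            protected override bool AuthorizeCore(HttpContextBase httpContext)
            {
                if (!httpContext.User.Identity.IsAuthenticated)
                    return false;
                // look the user up in the custom MS SQL rights table
                return RightsRepository.UserCanRead(
                    httpContext.User.Identity.Name,
                    httpContext.Request.AppRelativeCurrentExecutionFilePath);
            }

            protected override void HandleUnauthorizedRequest(AuthorizationContext filterContext)
            {
                filterContext.Result = new HttpNotFoundResult();  // 404, as requested
            }
        }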

  • How to extract part of the path and the ending file name with Regex?

    - by brasofilo
    I need to build an associative array with the plugin name and the language file it uses, from the following sequence:

        /whatever/path/length/public_html/wp-content/plugins/adminimize/languages/adminimize-en_US.mo
        /whatever/path/length/public_html/wp-content/plugins/audio-tube/lang/atp-en_US.mo
        /whatever/path/length/public_html/wp-content/languages/en_US.mo
        /whatever/path/length/public_html/wp-content/themes/twentyeleven/languages/en_US.mo

    Those are the language files WordPress is loading. They are all inside /wp-content/, but with variable server paths. I'm looking only for those inside the plugins folder, grabbing the plugin folder name and the filename. Hypothetical case in PHP, where the reg_extract_* functions are the parts I'm missing:

        $plugins = array();
        foreach( $big_array as $item )
        {
            $folder = reg_extract_folder( $item );
            if( 'plugin' == $folder )
            {
                // "folder-name-after-plugins-folder"
                $plugin_name = reg_extract_pname( $item );
                // "ending-mo-file.mo"
                $file_name = reg_extract_fname( $item );
                $plugins[] = array( 'name' => $plugin_name, 'file' => $file_name );
            }
        }

    [update] Ok, so I was missing quite a basic function, pathinfo... :/ No problem to detect if /plugins/ is contained in the array. But what about the plugin folder name?
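
    A minimal sketch of the missing piece: a single preg_match can both test for the plugins folder and capture the two parts at once (the pattern is an assumption based on the sample paths above):

        $plugins = array();
        foreach ( $big_array as $item )
        {
            // capture 1: folder right after /plugins/, capture 2: the .mo filename
            if ( preg_match( '~/wp-content/plugins/([^/]+)/.*?([^/]+\.mo)$~', $item, $m ) )
            {
                $plugins[] = array( 'name' => $m[1], 'file' => $m[2] );
            }
        }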

  • Using Python to get a CSV output for the following example.

    - by Az
    Hi there, I'm back again with my ongoing saga of Student-Project Allocation questions. Thanks to Moron (who does not match his namesake) I've got a bit of direction for the evaluation portion of my project. Going with the idea of the Assignment Problem and the Hungarian Algorithm, I would like to express my data in the form of a .csv file, which would end up looking like this in spreadsheet form (based on the structure I saw here):

        |          | Project 1 | Project 2 | Project 3 |
        |----------|-----------|-----------|-----------|
        | Student1 |           | 2         | 1         |
        |----------|-----------|-----------|-----------|
        | Student2 | 1         | 2         | 3         |
        |----------|-----------|-----------|-----------|
        | Student3 | 1         | 3         | 2         |
        |----------|-----------|-----------|-----------|

    To make it less cryptic: the rows are the Students/Agents and the columns represent Projects/Tasks. Obviously ONE project can be assigned to ONE student. That, in short, is what my project is about. The fields represent the preference weights the students have placed upon the projects (ranging from 1 to 10). If blank, that student does not want that project and there's no chance of him/her being assigned it. Anyway, my data is stored within dictionaries - specifically, the students and projects dictionaries, such that:

        students[student_id] = Student(student_id, student_name, alloc_proj, alloc_proj_rank, preferences)

    where preferences is in the form of a dictionary such that preferences[rank] = {project_id}, and

        projects[project_id] = Project(project_id, project_name)

    I'm aware that sorted(students.keys()) will give me a sorted list of all the student IDs, which will populate the row labels, and sorted(projects.keys()) will give me the list I need to populate the column labels. Thus for each student, I'd go into their preferences dictionary and match the applicable projects to ranks. I can do that much. Where I'm failing is understanding how to create a .csv file. Any help, pointers or good tutorials will be highly appreciated.
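
    A minimal sketch with the standard-library csv module (Python 2 era, matching the question; it assumes preferences maps each rank to a single project_id, so it is inverted first):

        import csv

        project_ids = sorted(projects.keys())
        with open('allocation.csv', 'wb') as f:
            writer = csv.writer(f)
            writer.writerow([''] + project_ids)          # header row of project IDs
            for student_id in sorted(students.keys()):
                prefs = students[student_id].preferences
                # invert rank -> project into project -> rank
                rank_of = dict((pid, rank) for rank, pid in prefs.items())
                writer.writerow([student_id] +
                                [rank_of.get(pid, '') for pid in project_ids])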

  • Need help fixing unique key in Rails - Rails is adding id, causing duplicate key

    - by railsnew
    I need some help fixing the issue below. I had transaction blocks in my Rails code like this:

        @sqlcontact = "INSERT INTO contacts (id, \"cid\", \"hphone\", mphone, provider, cemail, email, sms, mail, phone) VALUES ('"+@id1+"','" + @id1 + "', '"+ params[:hphone] + "', '"+params[:mphone]+ "', '" + params[:provider] + "', '" + params[:cemail]+ "', '" + @varemail+ "', '"+@varsms+ "', '"+ @varmail+"', '"+@varphone+"')"

    My app was deployed to Heroku, so I was advised by them to remove the transaction blocks. So I changed the above to:

        @cont = Contact.new(:id => @id1, :cid => @id1, :hphone => params[:hphone],
                            :mphone => params[:mphone], :provider => params[:provider],
                            :cemail => params[:cemail], :email => @varemail, :sms => @varsms,
                            :mail => @varmail, :phone => @varphone)
        @cont.save

    My app also already had data stored. Now the problem is that when I try to save a record, I keep getting the error:

        duplicate key value violates unique constraint "contacts_pkey"

    The error also shows the SQL query trying to insert the data; however, in that SQL query I do not see the id value. As you can see from my code, I am passing the id - then why is Rails not accepting it? Does it always include its own sequential id? Can I not overwrite the default Rails magic? And if it does that, does it not look at the data that is already in the DB? I am really stuck here. What should I do? Should I just go back to my transaction blocks?
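
    A hedged guess at the cause: rows inserted with explicit ids never advance PostgreSQL's id sequence, so the next auto-generated id collides with an existing row. A one-off realignment (the sequence name assumed to be the Rails default for contacts):

        # point the sequence past the highest id already in the table
        ActiveRecord::Base.connection.execute(
          "SELECT setval('contacts_id_seq', (SELECT MAX(id) FROM contacts))"
        )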

  • Using memory-based cache together with conventional cache

    - by Industrial
    Hi! Here's the deal. We would have taken the complete static-HTML road to solve performance issues, but since the site will be partially dynamic, this won't work out for us. What we have thought of instead is using memcached + eAccelerator to speed up PHP and take care of caching for the most used data. Here are the two approaches we have thought of right now:

    1) Using memcached on all major queries and leaving it alone to do what it does best.

    2) Using memcached for the most commonly retrieved data, and combining it with a standard hard-drive-stored cache for further usage.

    The major advantage of only using memcached is of course the performance, but as the number of users increases, the memory usage gets heavy. Combining the two sounds like a more natural approach to us, even with the theoretical compromise in performance. Memcached appears to have some replication features available as well, which may come in handy when it's time to increase the nodes. What approach should we use? Is it stupid to compromise and combine the two methods? Should we instead focus on utilizing memcached, and on upgrading the memory as the load increases with the number of users? Thanks a lot!

  • Return unordered list from hierarchical sql data

    - by Milan
    I have a table with pageId, parentPageId and title columns. Is there a way to return an unordered nested list using ASP.NET, a CTE, a stored procedure, a UDF... anything? The table looks like this:

        PageID  ParentId  Title
        1       null      Home
        2       null      Products
        3       null      Services
        4       2         Category 1
        5       2         Category 2
        6       5         Subcategory 1
        7       5         SubCategory 2
        8       6         Third Level Category 1
        ...

    The result should look like this:

        Home
        Products
            Category 1
                SubCategory 1
                    Third Level Category 1
                SubCategory 2
            Category 2
        Services

    Ideally, the list should contain <a> tags as well, but I hope I can add those myself if I find a way to create the <ul> list.

    EDIT 1: I thought there would already be a solution for this, but it seems there isn't. I wanted to keep it as simple as possible and to escape using the ASP.NET menu at any cost, because it uses tables by default; then I would have to use CSS adapters etc. Even if I decide to go down the "ASP.NET menu" route, I was only able to find this approach: http://aspalliance.com/822 which uses a DataAdapter and a DataSet :( Any more modern or efficient way?
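
    A hedged sketch of the CTE half of the job (the table name Pages is assumed): a recursive CTE returns every page with its depth and an orderable path, after which the nested <ul> can be emitted in a single pass by comparing each row's depth with the previous one:

        WITH tree AS (
            SELECT PageID, ParentId, Title, 0 AS Depth,
                   CAST(Title AS NVARCHAR(MAX)) AS SortPath
            FROM Pages
            WHERE ParentId IS NULL
            UNION ALL
            SELECT p.PageID, p.ParentId, p.Title, t.Depth + 1,
                   t.SortPath + N'/' + p.Title
            FROM Pages AS p
            INNER JOIN tree AS t ON p.ParentId = t.PageID
        )
        SELECT PageID, Title, Depth
        FROM tree
        ORDER BY SortPath;   -- depth-first order, ready for <ul> nesting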

  • Ruby Design Problem for SQL Bulk Inserter

    - by crunchyt
    This is a Ruby design problem. How can I make a reusable flat-file parser that can perform different data-scrubbing operations per call, return the emitted results from each scrubbing operation to the caller, and perform bulk SQL insertions? Now, before anyone gets narky/concerned, I have written this code already, in a very unDRY fashion - which is why I am asking any Ruby rockstars out there for some assistance. Basically, every time I want to perform this logic, I create two nested loops with custom processing in between, buffer each processed line to an array, and output to the DB as a bulk insert when the buffer size limit is reached. Although I have written lots of helpers, the main pattern is being copy-pasted every time. Not very DRY! Here is a Ruby/pseudocode example of what I am repeating:

        lines_from_file.each do |line|
          line.match(/some regex/).each do |sub_str|
            # Process substring into useful format
            # EG1: Simple gsub() call
            # EG2: Custom function call to do complex scrubbing
            #      and matching, emitting results to array
            # EG3: Loop to match opening/closing/nested brackets
            #      or other delimiters and emit results to array
          end

          # Add processed lines to a buffer as SQL insert statement
          @buffer << PREPARED_INSERT_STATEMENT

          # Flush buffer when "buffer size limit reached" or "end of file"
          if sql_buffer_full || last_line_reached
            @dbc.insert(SQL_INSERTS_FROM_BUFFER)
            @buffer = nil
          end
        end

    I am familiar with Proc/lambda functions. However, because I want to pass two separate procs to the one function, I am not sure how to proceed. I have some idea about how to solve this, but I would really like to see what the real Rubyists suggest? Over to you. Thanks in advance :D
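
    For what it's worth, Ruby methods take at most one block, but any number of lambdas can be passed as ordinary arguments, which is one way to make the pattern reusable. A minimal sketch (prepared_insert and @dbc stand in for the existing helpers):

        def bulk_load(lines, scrubber, inserter, batch_size = 500)
          buffer = []
          lines.each do |line|
            # the per-call scrubbing logic lives in the lambda
            scrubber.call(line).each { |row| buffer << prepared_insert(row) }
            if buffer.size >= batch_size
              inserter.call(buffer)
              buffer.clear
            end
          end
          inserter.call(buffer) unless buffer.empty?   # flush the tail
        end

        bulk_load(lines_from_file,
                  lambda { |line| line.scan(/some regex/) },
                  lambda { |batch| @dbc.insert(batch) })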

  • Generic property- specify the type at runtime

    - by Lirik
    I was reading a question on making a generic property, but I'm a little confused by the last example from the first answer (I've included the relevant code below):

        You have to know the type at compile time. If you don't know the type at
        compile time then you must be storing it in an object, in which case you
        can add the following property to the Foo class:

        public object ConvertedValue {
            get {
                return Convert.ChangeType(Value, Type);
            }
        }

    That seems strange: it's converting the value to the specified type, but returning it as an object, when the value was stored as an object. Doesn't the returned object still require unboxing? If it does, then why bother with the conversion of the type? I'm also trying to make a generic property whose type will be determined at run time:

        public class Foo {
            object Value { get; set; }
            Type ValType { get; set; }

            Foo(object value, Type type) {
                Value = value;
                ValType = type;
            }

            // I need a property that is actually
            // returned as the specified value type...
            public object ConvertedValue {
                get {
                    return Convert.ChangeType(Value, ValType);
                }
            }
        }

    Is it possible to make a generic property? Does the returned property still require unboxing after it's accessed?
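
    A hedged note: a property cannot introduce its own generic type parameter in C#, so a "generic property" in practice means a generic class. A minimal sketch of that shape, which avoids the object round-trip entirely when the type can be fixed where Foo is constructed:

        public class Foo<T>
        {
            public T Value { get; set; }

            public Foo(T value)
            {
                Value = value;
            }
        }

        // usage: no cast or unboxing at the call site
        var foo = new Foo<int>(42);
        int n = foo.Value;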

  • handle large Parcelable ArrayList in Android

    - by Gal Ben-Haim
    I'm developing an Android app that is a client to a JSON web-service API. I have classes of resource objects (some are nested), and I pass results from an IntentService that accesses the web service, using the Parcelable interface for all the resource classes. The web service returns arrays of results that can be potentially large (because of the nesting; for example, a post object also contains a comments array, and each comment also contains a user object). Currently I'm either inserting the results into a SQLite database or displaying them in a ListView (my relevant methods accept ArrayList<resourceClass> as arguments). Some data needs to be persistent and some should not be. Since I don't know what size of lists I can handle this way without reaching the memory limits, is this a good practice? Is it a better idea to save the parsed JSON to a local file immediately and pass the file path to the ResultReceiver, then either insert into the database from that file or display the data? Is there a better way to handle this? BTW - I'm parsing the JSON as a stream with Gson's Reader, so there shouldn't be memory issues at that stage.

  • Returning objects from another thread?

    - by Mark
    Trying to follow the hints laid out here, but she doesn't mention how to handle it when your collection needs to return a value, like so:

        private delegate TValue DequeueHandler();

        public virtual TValue Dequeue()
        {
            if (dispatcher.CheckAccess())
            {
                --count;
                var pair = dict.First();
                var queue = pair.Value;
                var val = queue.Dequeue();
                if (queue.Count == 0)
                    dict.Remove(pair.Key);
                OnCollectionChanged(new NotifyCollectionChangedEventArgs(NotifyCollectionChangedAction.Remove, val));
                return val;
            }
            else
            {
                dispatcher.BeginInvoke(new DequeueHandler(Dequeue));
            }
        }

    This obviously won't work, because dispatcher.BeginInvoke doesn't return anything. What am I supposed to do? Or maybe I could replace Dequeue with two functions, Peek and Pop, where Peek doesn't really need to be on the UI thread because it doesn't modify anything, right? As a side question, these methods don't need to be "locked" either, do they? If they're all forced to run on the UI thread, then there shouldn't be any concurrency issues, right?
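
    One hedged way out: the synchronous Dispatcher.Invoke does hand back the delegate's return value (as object), so the cross-thread branch can re-enter the method and cast. A sketch built from the question's own code:

        public virtual TValue Dequeue()
        {
            if (!dispatcher.CheckAccess())
            {
                // blocks until the UI thread has run Dequeue, then returns its result
                return (TValue)dispatcher.Invoke(new DequeueHandler(Dequeue));
            }
            --count;
            var pair = dict.First();
            var queue = pair.Value;
            var val = queue.Dequeue();
            if (queue.Count == 0)
                dict.Remove(pair.Key);
            OnCollectionChanged(new NotifyCollectionChangedEventArgs(NotifyCollectionChangedAction.Remove, val));
            return val;
        }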

  • Load SQL query result data into cache in advance

    - by Marc
    I have the following situation:

    - .NET 3.5 WinForms client app accessing SQL Server 2008
    - Some queries returning a relatively large amount of data are used quite often by a form
    - Users are using a local SQL Express instance and restarting their machines at least daily
    - Other users are working remotely over slow network connections

    The problem is that after a restart, the first time users open this form the queries are extremely slow, taking more or less 15 s to execute even on a fast machine. Afterwards the same queries take only 3 s. Of course this comes from the fact that no data is cached and it must first be loaded from disk. My question: would it be possible to force the loading of the required data into the SQL Server cache in advance?

    Note: my first idea was to execute the queries in a background worker when the application starts, so that when the user opens the form the queries will already be cached and execute fast directly. However, I don't want to pull the results of the queries over to the client, as some users are working remotely or otherwise have slow networks. So I thought of just executing the queries from a stored procedure and putting the results into temporary tables so that nothing would be returned. It turned out that some of the result sets use dynamic columns, so I couldn't create the corresponding temp tables, and thus this isn't a solution. Do you happen to have any other idea?
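
    A hedged sketch of one workaround: SELECT ... INTO infers the temp table's schema from the query itself, so the dynamic-column problem goes away; executed from a background worker at startup, it reads the same pages into the buffer pool while returning nothing to the client (the inner query is a placeholder for the form's real one):

        CREATE PROCEDURE dbo.WarmCache
        AS
        BEGIN
            -- the temp table's columns are inferred, so dynamic result sets are fine
            SELECT q.* INTO #warm
            FROM ( SELECT /* the form's expensive query here */ 1 AS placeholder ) AS q;

            DROP TABLE #warm;  -- nothing is returned to the caller
        END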

  • Java SWT: wrapping syncExec and asyncExec to clean up code

    - by jonescb
    I have a Java application using SWT as the toolkit, and I'm getting tired of all the ugly boilerplate code it takes to update a GUI element. Just to set a disabled button to be enabled, I have to go through something like this:

        shell.getDisplay().asyncExec(new Runnable() {
            public void run() {
                buttonOk.setEnabled(true);
            }
        });

    I prefer keeping my source code as flat as I possibly can, but I need a whopping 3 indentation levels just to do something simple. Is there some way I can wrap it? I would like a class like:

        public class UIUpdater {
            public static void updateUI(Shell shell, *function_ptr*) {
                shell.getDisplay().asyncExec(new Runnable() {
                    public void run() {
                        // Execute function_ptr
                    }
                });
            }
        }

    which can be used like so:

        UIUpdater.updateUI(shell, buttonOk.setEnabled(true));

    Something like this would be great for hiding that horrible mess SWT seems to think is necessary to do anything. As I understand it, Java cannot do function pointers, but Java 7 will have something called closures, which should be what I want. In the meantime, is there anything at all I can do to pass a function pointer or callback to another function to be executed? As an aside, I'm starting to think it'd be worth the effort to redo this application in Swing, so I don't have to put up with this ugly crap and the non-cross-platformyness of SWT.
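
    A hedged sketch of the pre-closures option: Runnable already is a one-method callback type, so the wrapper can accept one directly, trimming the call site to a single level of nesting (with Java 8+ lambdas this later collapses to UIUpdater.updateUI(shell, () -> buttonOk.setEnabled(true))):

        public final class UIUpdater {
            public static void updateUI(Shell shell, Runnable task) {
                shell.getDisplay().asyncExec(task);
            }
        }

        // call site: one level of nesting instead of three
        UIUpdater.updateUI(shell, new Runnable() {
            public void run() { buttonOk.setEnabled(true); }
        });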

  • Mmap and structure

    - by blid..pl
    I'm working on some code involving communication between processes, using semaphores. I made a structure like this:

        typedef struct container {
            sem_t resource, mutex;
            int counter;
        } container;

    and use it this way (in the main app, and the same in the subordinate processes):

        container *memory;

        shm_unlink("MYSHM"); // just in case
        fd = shm_open("MYSHM", O_RDWR|O_CREAT|O_EXCL, 0);
        if (fd == -1) {
            printf("Error");
            exit(EXIT_FAILURE);
        }
        memory = mmap(NULL, sizeof(container), PROT_READ|PROT_WRITE, MAP_SHARED, fd, 0);
        ftruncate(fd, sizeof(container));

    Everything is fine when I use one of the sem_ functions, but when I try to do something like

        memory->counter = 5;

    it doesn't work. Probably I got something wrong with pointers, but I've tried almost everything and nothing seems to work. Maybe there's a better way to share variables, structures etc. between processes? Unfortunately I'm not allowed to use Boost or anything similar - the code is for educational purposes and I intend to keep it as simple as possible.
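
    A hedged sketch of the usual ordering (reusing the container struct above): give shm_open a real mode, size the object with ftruncate before mmap, and check mmap's result - touching pages of a still-zero-length object is what typically faults:

        #include <fcntl.h>
        #include <semaphore.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <sys/mman.h>
        #include <unistd.h>

        int main(void) {  /* link with -lrt on older Linux */
            container *memory;
            int fd;

            shm_unlink("MYSHM");
            fd = shm_open("MYSHM", O_RDWR | O_CREAT | O_EXCL, 0600);  /* mode 0 blocks reopening */
            if (fd == -1) { perror("shm_open"); exit(EXIT_FAILURE); }

            if (ftruncate(fd, sizeof(container)) == -1) {             /* size it BEFORE mapping */
                perror("ftruncate"); exit(EXIT_FAILURE);
            }

            memory = mmap(NULL, sizeof(container), PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
            if (memory == MAP_FAILED) { perror("mmap"); exit(EXIT_FAILURE); }

            memory->counter = 5;   /* now backed by committed pages */
            return 0;
        }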

  • TransactionScope question - how can I keep the DTC from getting involved in this?

    - by larryq
    (I know the circumstances surrounding the DTC and the promotion of a transaction can be a bit mysterious to those of us not in the know, but let me show you how my company is doing things, and if you can tell me why the DTC is getting involved - and, if possible, what I can do to avoid it - I'd be grateful.) I have code running on an ASP.NET web server. We have one database, SQL Server 2008. Our data access layer uses a wrapper object for SqlConnections and SqlCommands. Typical use looks like this:

        void method1()
        {
            objDataObject = new DataAccessLayer();
            objDataObject.Connection = SomeConnectionMethod();
            SqlCommand objCommand = DataAccessUtils.CreateCommand(SomeStoredProc);
            // create some SqlParameters, add them to objCommand, etc....
            objDataObject.BeginTransaction(IsolationLevel.ReadCommitted);
            objDataObject.ExecuteNonQuery(objCommand);
            objDataObject.CommitTransaction();
            objDataObject.CloseConnection();
        }

    So indeed, a very thin wrapper around SqlClient, SqlConnection etc. I want to run several stored procs in a transaction, and the wrapper class doesn't give me access to the SqlTransaction, so I can't pass it from one component to the next. This led me to use a TransactionScope:

        using (TransactionScope tx1 = new TransactionScope(TransactionScopeOption.RequiresNew))
        {
            method1();
            method2();
            method3();
            tx1.Complete();
        }

    When I do this, the DTC gets involved, and unfortunately our web servers don't have "allow remote clients" enabled in the MSDTC settings - getting IT to allow that will be a fight. I'd love to avoid the DTC becoming involved, but can I do it? Can I leave out the transactional calls in methods 1-3 and just let the TransactionScope figure it all out?
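
    A hedged sketch of the usual fix: on SQL Server 2008, a TransactionScope stays a lightweight local transaction as long as only one connection is enlisted in it at a time, so opening a single SqlConnection inside the scope and handing it to all three methods - instead of letting each wrapper open its own, which forces promotion to the DTC - keeps the DTC out. The method signatures here are assumptions:

        using (var tx1 = new TransactionScope())
        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();      // auto-enlists in the ambient transaction
            method1(conn);    // each method reuses this one connection
            method2(conn);
            method3(conn);
            tx1.Complete();
        }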

  • Hashtable resizing leaks memory

    - by thpetrus
    I wrote a hashtable, and it basically consists of these two structures:

        typedef struct dictEntry {
            void *key;
            void *value;
            struct dictEntry *next;
        } dictEntry;

        typedef struct dict {
            dictEntry **table;
            unsigned long size;
            unsigned long items;
        } dict;

    dict.table is an array of buckets, each a linked list containing the stored key/value pairs. If half of the hashtable is full, I expand it by doubling the size and rehashing:

        dict *_dictRehash(dict *d) {
            int i;
            dict *_d;
            dictEntry *dit;

            _d = dictCreate(d->size * 2);

            for (i = 0; i < d->size; i++) {
                for (dit = d->table[i]; dit != NULL; dit = dit->next) {
                    _dictAddRaw(_d, dit);
                }
            }

            /* FIXME memory leak because the old dict can never be freed */
            free(d); // seg fault

            return _d;
        }

    The function above reuses the pointers from the old hashtable and stores them in the newly created one. When freeing the old dict d, a segmentation fault occurs. How can I free the old hashtable struct without having to allocate the memory for the key/value pairs again?
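
    A hedged sketch of a likely fix, assuming _dictAddRaw rewires each entry's next pointer into the new table: save next before handing the entry over (otherwise the inner loop follows pointers that now belong to the new table's chains), and free the old bucket array as well as the dict shell:

        dict *_dictRehash(dict *d)
        {
            unsigned long i;
            dict *_d = dictCreate(d->size * 2);

            for (i = 0; i < d->size; i++) {
                dictEntry *dit = d->table[i];
                while (dit != NULL) {
                    dictEntry *next = dit->next;  /* save before _dictAddRaw relinks it */
                    _dictAddRaw(_d, dit);
                    dit = next;
                }
            }
            free(d->table);  /* the bucket array is its own allocation */
            free(d);         /* entries live on in _d, so only the shell is freed */
            return _d;
        }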
