Search Results

Search found 1977 results on 80 pages for 'concurrent modification'.

Page 64 of 80

  • PreparedStatement question in Java against Oracle.

    - by fardon57
    Hi everyone, I'm working on modifying some code to use PreparedStatement instead of a plain Statement, for security and performance reasons. Our application currently stores information in an embedded Derby database, but we are moving to Oracle soon. I've found two things I need your help with regarding Oracle and PreparedStatement:

    1. I've found a document saying that Oracle doesn't handle bind parameters in IN clauses, so we cannot supply a query like:

        SELECT pokemon FROM pokemonTable WHERE capacity IN (?,?,?,?)

    Is that true? Is there any workaround? And why?

    2. We have some fields of type TIMESTAMP. With our current Statement, the query looks like this:

        SELECT raichu FROM pokemonTable WHERE evolution = TO_TIMESTAMP('2500-12-31 00:00:00.000', 'YYYY-MM-DD HH24:MI:SS.FF')

    What should be done for a PreparedStatement? Should I put '2500-12-31' into the array of parameters, or the whole TO_TIMESTAMP('2500-12-31 00:00:00.000', 'YYYY-MM-DD HH24:MI:SS.FF') expression? Thanks for your help, I hope my questions are clear! Regards,


  • How to update GUI thread/class from worker thread/class?

    - by user315182
    First question here, so hello everyone. The requirement I'm working on is a small test application that communicates with an external device over a serial port. The communication can take a long time, and the device can return all sorts of errors. The device is nicely abstracted in its own class, which the GUI thread starts in its own thread; it has the usual open/close/read data/write data basic functions. The GUI is also pretty simple - choose COM port, open, close, show data read or errors from the device, allow modification and write-back, etc.

    The question is simply: how do I update the GUI from the device class? There are several distinct types of data the device deals with, so I need a relatively generic bridge between the GUI form/thread class and the working device class/thread. In the GUI-to-device direction everything works fine with [Begin]Invoke calls for open/close/read/write etc. on the various GUI-generated events.

    I've read the thread here (How to update GUI from another thread in C#?) where the assumption is made that the GUI and worker thread are in the same class. Google searches throw up how to create a delegate or how to create the classic background worker, but that's not at all what I need, although they may be part of the solution.

    So, is there a simple but generic structure that can be used? My level of C# is moderate and I've been programming all my working life; given a clue I'll figure it out (and post back)... Thanks in advance for any help.


  • Google Maps API and "rightclick" events on Macs

    - by samc
    Using the Google Maps API (v3), I can create a map and handle normal click events just fine, but when I want to handle rightclick events, it doesn't work on Macs. I assume this is because a right-click on a Mac is actually delivered as a ctrl-click, and the Google Maps API MouseEvent doesn't provide information about modifier keys, so I can't check for the ctrl key.

    I tried adding a "capture" event listener to the document that converts the click event to a rightclick event:

        function convertClick(e) {
            if (e.ctrlKey) {
                e.button = 2;
            }
        }
        document.addEventListener("click", convertClick, true);

    I added an alert to verify that the condition is correct, but modifying the event in this way didn't work. So, I decided to have my event handler set a global flag that my click handler could check. If the flag is set, it means ctrl was pressed, so the click handler just invokes the rightclick handler:

        var ctrl;
        function captureCtrl(e) {
            ctrl = e.ctrlKey;
        }

    This approach worked great, except for one thing: the ctrl flag gets set for the click after the one that occurred while ctrl was pressed. That means the event handler is being called during the bubble phase rather than the capture phase, which could also explain why the event modification approach didn't work.

    So, my question is: how can you detect "rightclick" events from Macs with the Google Maps API? I can't be the first person to want to do this. That said, when I right-click on the map at http://maps.google.com from a Windows or Linux machine, I get a popup box with options like "Directions from here...", etc. On a Mac, nothing happens. So not even the main Google Maps page has solved this problem. ...Maybe I am the first person to want to do this.


  • __toString magic and type coercion

    - by TomcatExodus
    I've created a Template class for managing views and their associated data. It implements Iterator and ArrayAccess, and permits "sub-templates" for easy usage like so:

        <p><?php echo $template['foo']; ?></p>
        <?php foreach ($template->post as $post): ?>
            <p><?php echo $post['bar']; ?></p>
        <?php endforeach; ?>

    Anyway, rather than using inline core functions such as hash() or date(), I figured it would be useful to create a class called TemplateData, which would act as a wrapper for any data stored in the templates. This way, I can add a list of common methods for formatting, for example:

        echo $template['foo']->asCase('upper');
        echo $template['bar']->asDate('H:i:s');
        // etc.

    When a value is set via $template['foo'] = 'bar'; in the controllers, the value 'bar' is stored in its own TemplateData object. I've used the magic __toString() so that when you echo a TemplateData object, it casts to (string) and dumps its value. However, despite the mantra that controllers and views should not modify data, whenever I do something like this:

        $template['foo'] = 1;
        echo $template['foo'] + 1; // exception

    it dies with "Object of class TemplateData could not be converted to int", unless I recast $template['foo'] to a string:

        echo ((string) $template['foo']) + 1; // outputs 2

    Having to jump through that hoop sort of defeats the purpose. Are there any workarounds for this behavior, or should I just take it as it is - an incidental prevention of data modification in views?


  • What arguments to use to explain why SQL Server is far better than a flat file

    - by jamone
    The higher-ups in my company were told by good friends that flat files are the way to go, and that we should switch from SQL Server to them for everything we do. We have over 300 servers and hundreds of different databases. Just in the few I'm involved with, we have 10 billion records in quite a few of them, with upwards of 100k new records a day, and who knows how many updates. A couple of others and I need to come up with a response saying why we shouldn't do this. Most of our stuff is ASP.NET with some legacy ASP.

    Our idea is a simple console app that tests/times the same interactions - large inserts, searches, updates, etc. - against both a flat file (stored on the network) and SQL Server over the network, along with things like random network disconnects. This would show them how bad flat files can be, especially when you are dealing with millions of records.

    What things should I use in my response? What should I do with my demo code to illustrate this? My short list so far:

      - Security
      - Concurrent access
      - Performance with large amounts of data
      - Amount of time to do such a massive rewrite/switch
      - Lack of transactions
      - PITA to map relational data to flat files
      - NTFS doesn't support tons of files in a directory well

    I fear that this will be a great post on The Daily WTF someday if I can't stop it now.


  • Script to install and compile Python, Django, Virtualenv, Mercurial, Git, LessCSS, etc. on Dreamhost

    - by tmslnz
    The Story

    After cleaning up my Dreamhost shared server's home folder of all the cruft accumulated over time, I decided to start afresh and compile/reinstall Python. All the tutorials and snippets I found seemed overly simplistic, assuming (or ignoring) a bunch of dependencies Python needs to compile all modules correctly. So, starting from http://andrew.io/weblog/2010/02/installing-python-2-6-virtualenv-and-virtualenvwrapper-on-dreamhost/ (so far the best guide I found), I decided to write a set-and-forget Bash script to automate this painful process, including along the way a bunch of other things I am planning to use.

    The Script

    I am hosting the script at http://bitbucket.org/tmslnz/python-dreamhost-batch/src/

    The TODOs

    So far it runs fine, and does all it needs to do in about 900 seconds, giving me at the end of the process a fully functional Python / Mercurial / etc. setup without even needing to log out and back in. I thought this might be of use for others too, but there are a few things that I think it's missing, and I am not quite sure how to go about them, what the best way to do them is, or whether they make any sense at all:

      - Check for errors and break
      - Check for minor version bumps of the packages and give warnings
      - Check for known dependencies
      - Use arguments to install only some of the packages instead of commenting out lines
      - Organise the code in a manner that's easy to update
      - Optionally make the installers and compiling silent, with error logging to file
      - Failproof .bashrc modification to prevent breaking SSH logins and having to log back in via FTP to fix it

    EDIT: The implied question is: can anyone, more bashful than me, offer general advice on the worthiness of the above points or highlight any problems they see with this approach? (See my answer to Ry4an's comment below.)

    The Gist

    I am no UNIX or Bash or compiler expert, and this has been built iteratively, by trial and error. It is somehow going towards apt-get (well, 1% of it...), but since Dreamhost and others obviously cannot give root access on shared servers, this looks to me like a potentially very useful workaround, particularly so with some community work involved.


  • Node & Redis: Crucial Design Issues in Production Mode

    - by Ali
    This question is a hybrid one, both technical and system-design related. I'm developing the backend of an application that will handle approx. 4K requests per second. We are using Node.js for speed, and in terms of our database structure we are using MongoDB, with Redis as a layer between Node and MongoDB handling volatile operations. I'm quite stressed because we are expecting concurrent requests that we need to handle carefully, and we are quite close to launch. However, I don't believe I've applied the correct approach with Redis.

    I have a class Student, and students constantly change stages (such as 'active', 'doing homework', 'in lesson', etc.). Thus I created a Redis DB for each state (1 for 'active', 2 for 'doing homework', and so on). Here is the structure of the 'active' students table:

        xa5p - JSON stringified object #1
        pQrW - JSON stringified object #2
        active_student_table - {{studentId:'xa5p'}, {studentId:'pQrW'}}

    Since there is no 'select all keys' method in Redis, I've been advised to use a set, such that when I run SMEMBERS I receive the keys, and then do a GET for each id in order to find specific users (say, those older than 15). I've also been told that I should never use KEYS in production mode.

    My question is: no matter how 'conceptual' it is, what specific things should I avoid doing with Node and Redis in production? Are there any issues with my design? Students must be objects, and I could keep them in a list, but I haven't done so yet. Is that crucial in production?


  • Unit Testing Private Method in Resource Managing Class (C++)

    - by BillyONeal
    I previously asked this question under another name but deleted it because I didn't explain it very well.

    Let's say I have a class which manages a file. Let's say that this class treats the file as having a specific file format, and contains methods to perform operations on this file:

        class Foo
        {
            std::wstring fileName_;
        public:
            Foo(const std::wstring& fileName) : fileName_(fileName)
            {
                // Construct a Foo here.
            }
            int getChecksum()
            {
                // Open the file and read some part of it.
                // Long method to figure out what checksum it is.
                // Return the checksum.
            }
        };

    Let's say I'd like to be able to unit test the part of this class that calculates the checksum. Unit testing the parts of the class that load in the file and such is impractical, because to test every part of the getChecksum() method I might need to construct 40 or 50 files!

    Now let's say I'd like to reuse the checksum method elsewhere in the class. I extract the method so that it now looks like this:

        class Foo
        {
            std::wstring fileName_;
            static int calculateChecksum(const std::vector<unsigned char>& fileBytes)
            {
                // Long method to figure out what checksum it is.
            }
        public:
            Foo(const std::wstring& fileName) : fileName_(fileName)
            {
                // Construct a Foo here.
            }
            int getChecksum()
            {
                // Open the file and read some part of it.
                return calculateChecksum( something );
            }
            void modifyThisFileSomehow()
            {
                // Perform modification
                int newChecksum = calculateChecksum( something );
                // Apply the newChecksum to the file
            }
        };

    Now I'd like to unit test the calculateChecksum() method because it's easy to test and complicated, and I don't care about unit testing getChecksum() because it's simple and very difficult to test. But I can't test calculateChecksum() directly because it is private. Does anyone know of a solution to this problem?
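
    One common way out, assuming the checksum logic doesn't actually need Foo's private state (the names below are illustrative, not from the post): hoist calculateChecksum() into a free function in a detail namespace and have Foo forward to it. Unit tests then call the free function directly with hand-built byte vectors, and no files ever touch the disk:

        #include <string>
        #include <vector>

        namespace detail
        {
            // Pure function over bytes; the XOR body is a placeholder for
            // the real checksum algorithm.
            inline int calculateChecksum(const std::vector<unsigned char>& fileBytes)
            {
                int sum = 0;
                for (std::size_t i = 0; i < fileBytes.size(); ++i)
                    sum ^= fileBytes[i];
                return sum;
            }
        }

        class Foo
        {
            std::wstring fileName_;
        public:
            explicit Foo(const std::wstring& fileName) : fileName_(fileName) {}
            int getChecksum()
            {
                std::vector<unsigned char> bytes; // ...read the relevant part of fileName_
                return detail::calculateChecksum(bytes);
            }
        };

    The other usual options - befriending the test fixture, or exposing a protected seam in a test-only subclass - keep the function private but couple the tests to Foo's internals, which is exactly what made it hard to test in the first place.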


  • JBoss Cache as Hibernate 2nd-level cache - cluster node doesn't persist replicated data

    - by Sergey Grashchenko
    I'm trying to build the architecture described in the user guide http://www.jboss.org/file-access/default/members/jbosscache/freezone/docs/3.2.1.GA/userguide_en/html/cache_loaders.html#d0e3090 (replicated caches, with each cache having its own store), but with JBoss Cache configured as the Hibernate second-level cache. I've read the manual for several days and played with the settings, but could not achieve the result: the data in memory (JBoss Cache) gets replicated across the hosts, but it's not persisted in the datasource/database of the target (non-original) cluster host.

    I had a hope that a node might become persistent at eviction, so I wrote a cache listener and attached it to the @NodeEvicted event. I found that though I could adjust the eviction policy to fully control it, no persistence takes place. Then I thought I could try to modify the CacheLoader to set "passivation" to true, but I found that in my case (Hibernate 2nd-level cache) I don't have a way to access the loader.

    I wonder if replicated data persistence is possible at all through configuration tuning? If not, will it work for me to implement some manual persistence in the CacheListener (I could check whether the eviction event is local, and if not, persist it to the Hibernate datasource somehow)?

    I've used the mvcc-entity configuration with one modification: cacheMode set to REPL_ASYNC. I've also played with the eviction policy configuration. The last thing to mention is that I've tested entity persistence and replication in a project generated with Seam. I guess that's not important, though.


  • Using shared_ptr to implement RCU (read-copy-update)?

    - by yongsun
    I'm very interested in user-space RCU (read-copy-update), and am trying to simulate one via tr1::shared_ptr. I'm really a newbie in concurrent programming; would some experts help me review the code?

    The basic idea is: a reader calls get_reading_copy() to gain a pointer to the current protected data (let's say it's generation one, or G1). A writer calls get_updating_copy() to gain a copy of G1 (let's say G2), and only one writer is allowed to enter the critical section. After the update is done, the writer calls update() to do a swap, making m_data_ptr point to the G2 data. The ongoing readers and the writer now hold shared_ptrs to G1, and either a reader or the writer will eventually deallocate the G1 data. Any new reader gets the pointer to G2, and a new writer gets a copy of G2 (let's say G3). It's possible that G1 is not released yet, so multiple generations of data may co-exist.

        template <typename T>
        class rcu_protected
        {
        public:
            typedef T type;
            typedef std::tr1::shared_ptr<type> rcu_pointer;

            rcu_protected() : m_is_writing(0),
                              m_is_swapping(0),
                              m_data_ptr(new type())
            {}

            rcu_pointer get_reading_copy()
            {
                spin_until_eq(m_is_swapping, 0);
                return m_data_ptr;
            }

            rcu_pointer get_updating_copy()
            {
                spin_until_eq(m_is_swapping, 0);
                while (!CAS(m_is_writing, 0, 1))
                { /* sleep for back-off when exceeding maximum retry times */ }

                rcu_pointer new_data_ptr(new type(*m_data_ptr));
                // As spin_until_eq does not have memory barrier protection,
                // we need a read barrier so the loading of new_data_ptr is
                // not re-ordered before its construction.
                _ReadBarrier();
                return new_data_ptr;
            }

            void update(rcu_pointer new_data_ptr)
            {
                while (!CAS(m_is_swapping, 0, 1)) {}
                m_data_ptr.swap(new_data_ptr);
                // As spin_until_eq does not have memory barrier protection,
                // we need a write barrier so the assignments to
                // m_is_writing/m_is_swapping are not re-ordered before the swap.
                _WriteBarrier();
                m_is_writing = 0;
                m_is_swapping = 0;
            }

        private:
            volatile long m_is_writing;
            volatile long m_is_swapping;
            rcu_pointer m_data_ptr;
        };
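
    A much shorter route to the same publish/snapshot behaviour, sketched here as a point of comparison rather than a review of the code above, and assuming a C++11 standard library that provides the atomic free-function overloads for shared_ptr in <memory> (std::atomic_load / std::atomic_store). Writers serialize on a mutex; readers never block:

        #include <memory>
        #include <mutex>

        template <typename T>
        class rcu_cell
        {
        public:
            rcu_cell() : data_(std::make_shared<T>()) {}

            // Reader: grab a snapshot. The shared_ptr keeps that generation
            // alive for as long as any reader still holds it.
            std::shared_ptr<const T> read() const
            {
                return std::atomic_load(&data_);
            }

            // Writer: copy the current generation, mutate the copy, publish.
            // The mutex serializes writers only; readers are unaffected.
            template <typename F>
            void update(F mutate)
            {
                std::lock_guard<std::mutex> lock(write_mutex_);
                auto next = std::make_shared<T>(*std::atomic_load(&data_));
                mutate(*next);
                std::atomic_store(&data_, std::move(next));
            }

        private:
            mutable std::mutex write_mutex_;
            std::shared_ptr<T> data_;
        };

    As in the hand-rolled version, several generations can co-exist until their last holders release them; the difference is that the library's atomic loads and stores replace the hand-written CAS spin loops and compiler barriers.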


  • What can I use to journal writes to the file system?

    - by Dmitry
    Hello, all. I need to track all writes to files in order to keep a synchronized version of them in a different place (a server, or just another directory; not important). Constraints:

      - All files are located in the same directory.
      - Feel free to create some system files (e.g. SomeFileName.Ext~temp-data).
      - No one has concurrent access to the synced directory; nobody spoils our meta-files or changes the real files before we do the postponed writes (like commits).
      - No need to care about recovering "local" changes in case of a crash; the system can just be rolled back to the "server" state by a simple copy from it.
      - It is significant that it be transparent to use (the programmer must just call the ordinary fopen(), read(), write()).

    It must be guaranteed that the copy of the files which the "server" has is consistent - that is, the whole set of files as it existed at some moment in time. They may be substantially outdated, but they must be a fair snapshot of all files at some point.

    As I understand it, I should overload the writing logic in order to collect data to send changes to the "server" - for example, writing to a temporary File~tmp - and I would have to overload reads so the program can read the actual data of the file. It would be great if you could suggest some existing library (Java or C++, it is unimportant) or solution (VCS customizing?), or give hints on how I should write it myself.

    EDIT: After some reading I have more precise requirements: I need a COW (copy-on-write) wrapper for fopen(), fwrite(), ..., or an interceptor (hook) for WriteFile() and other FS API system calls. A log-structured file system in userspace would be an alternative too.
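
    For the roll-your-own route, a minimal copy-on-write sketch in C++ of the "write to File~tmp, commit later" idea above - the class name and temp suffix are made up for illustration, and a real implementation would still need the read-side redirection plus a manifest so the whole directory commits as one snapshot:

        #include <cstddef>
        #include <cstdio>
        #include <string>

        // Hypothetical copy-on-write file: writes land in a sibling temp
        // file, and commit() publishes them by renaming it over the original.
        class CowFile
        {
        public:
            explicit CowFile(const std::string& path)
                : path_(path),
                  temp_(path + "~temp-data"),
                  out_(std::fopen(temp_.c_str(), "wb")) {}

            bool ok() const { return out_ != 0; }

            std::size_t write(const void* buf, std::size_t len)
            {
                return std::fwrite(buf, 1, len, out_);
            }

            // On POSIX, rename() atomically replaces the target, so a reader
            // of the synced copy always sees either the old file or the
            // complete new one, never a partial write. (Windows needs
            // ReplaceFile or similar for the same guarantee.)
            bool commit()
            {
                std::fclose(out_);
                out_ = 0;
                return std::rename(temp_.c_str(), path_.c_str()) == 0;
            }

        private:
            std::string path_;
            std::string temp_;
            std::FILE* out_;
        };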


  • Socket Read in Multi-Threaded Application Returns Zero Bytes or -1 (errno 104)

    - by user309670
    Hi. I've been a C coder for a while now - neither a newbie nor an expert. I have a certain daemonized application in C on a PPC Linux box. I use PHP's socket_connect as a client to connect to this service locally. The server uses epoll for concurrent connections via a Unix socket. A user-submitted string is parsed for certain characters/words using strstr(), and if they are found, the daemon spawns 4 joinable threads to different websites simultaneously. I use socket, connect, write and read to interact with the said webservers via TCP on port 80 in each thread. All connections and writes seem successful. Reads from the webserver sockets fail, however, with either:

    (A) all 3 other threads seeming to hang, and only one thread returning -1 with errno set to 104. The responding thread takes like 10 minutes - an eternity :-(. I read somewhere that 104 suggests 'the connection was reset by peer' (on Linux, errno 104 is ECONNRESET, not EINTR); or

    (B) 0 bytes from 3 threads, and only 1 of the 4 threads actually returning some data.

    Isn't socket read/write thread-safe? I otherwise use thread-safe (and reentrant) libc functions such as strtok_r, gethostbyname_r, etc. I doubt that the said webhosts are actually resetting the connection, because when I run a single-threaded standalone version (everything else being equal) it all works perfectly.

    There's a second problem too (oops): I can't write back to the client who connects to my epoll-ed Unix socket. My daemon application hangs and hogs the CPU at 100% forever, yet nothing is written to the client's end. I'm sure the client (a very typical PHP socket application) hasn't closed the connection when this happens - no errors detected either. I cannot figure out what's wrong, even with Valgrind or GDB.
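
    One thing worth ruling out before blaming the webhosts: blocking-socket reads routinely return short counts, 0 on an orderly peer close, or -1 with errno == EINTR, and each thread has to loop over its own descriptor accordingly. A hedged sketch of such a loop (plain POSIX calls, not taken from the poster's code):

        #include <cerrno>
        #include <unistd.h>

        // Read up to len bytes, retrying on EINTR and accumulating partial
        // reads. Returns the byte count (possibly short on EOF), or -1 on a
        // real error such as ECONNRESET.
        ssize_t read_all(int fd, char* buf, size_t len)
        {
            size_t total = 0;
            while (total < len)
            {
                ssize_t n = read(fd, buf + total, len - total);
                if (n > 0)  { total += static_cast<size_t>(n); continue; }
                if (n == 0) break;            // peer performed an orderly close
                if (errno == EINTR) continue; // interrupted by a signal: retry
                return -1;                    // genuine failure
            }
            return static_cast<ssize_t>(total);
        }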


  • Is it undefined behavior to call private member functions in the initializer list?

    - by Alexey Malistov
    Consider the following code:

        struct Calc
        {
            Calc(const Arg1& arg1, const Arg2& arg2, /* */ const ArgN& argn)
                : arg1(arg1), arg2(arg2), /* */ argn(argn),
                  coef1(get_coef1()), coef2(get_coef2())
            {}

            int Calc1();
            int Calc2();
            int Calc3();

        private:
            const Arg1& arg1;
            const Arg2& arg2;
            // ...
            const ArgN& argn;

            const int coef1; // I want to use const because
            const int coef2; // no modification is needed.

            int get_coef1() const
            {
                // calc coef1 using arg1, arg2, ..., argn;
                // undefined behavior?
            }
            int get_coef2() const
            {
                // calc coef2 using arg1, arg2, ..., argn and coef1;
                // undefined behavior?
            }
        };

    struct Calc is not completely constructed when I call get_coef1 and get_coef2. Is this code valid? Can I get UB?
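
    For what it's worth, the usual analysis hinges on declaration order: data members are initialized in the order they are declared, not the order they appear in the mem-initializer list, and calling a private member function there is itself fine - the danger is only that get_coef1()/get_coef2() might read a member that hasn't been initialized yet. A sketch of a pattern that makes the dependencies explicit (with stand-in Arg types and placeholder bodies, since the originals aren't shown):

        struct Arg1 {};
        struct Arg2 {}; // stand-ins for the question's types

        struct Calc
        {
            Calc(const Arg1& a1, const Arg2& a2)
                : arg1(a1), arg2(a2),                 // declared first, initialized first
                  coef1(compute_coef1(a1, a2)),       // reads only constructor parameters
                  coef2(compute_coef2(a1, a2, coef1)) // coef1 is already initialized here
            {}

        private:
            // Static helpers have no 'this', so they cannot accidentally read
            // a not-yet-initialized member: every input is passed explicitly.
            static int compute_coef1(const Arg1&, const Arg2&)         { return 1;      /* placeholder */ }
            static int compute_coef2(const Arg1&, const Arg2&, int c1) { return c1 + 1; /* placeholder */ }

            const Arg1& arg1;
            const Arg2& arg2;
            const int coef1;
            const int coef2;
        };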


  • Parallel.ForEach loop creating multiple DB connections throws connection errors?

    - by shawn.mek
    "Login failed. The login is from an untrusted domain and cannot be used with Windows authentication."

    I wanted to get my code running in parallel, so I changed my foreach loop to a Parallel.ForEach loop. It seemed simple enough. Each iteration connects to the database, looks up some stuff, performs some logic, adds some stuff, and closes the connection. But I get the above error. I'm using my local SQL Server and Entity Framework (each iteration uses its own context). Is there some problem with connecting multiple times using the same local login or something? How do I get around this?

    Before trying to convert to a Parallel.ForEach loop, I had split my list of objects into four groups (separate CSV files) and run four concurrent instances of my program, which ran faster overall than just one - thus the idea for going parallel. So it seems connecting to the DB shouldn't be a problem? Any ideas?

    EDIT: Here's before:

        var gtgGenerator = new CustomGtgGenerator();
        var connectionString = ConfigurationManager.ConnectionStrings["BioEntities"].ConnectionString;
        var allAccessionsFromObs = _GetAccessionListFromDataFiles(collectionId);
        foreach (var cloneIdAndAccessions in allAccessionsFromObs)
            DoWork(gtgGenerator, taxonId, organismId, cloneIdAndAccessions, connectionString);

    and after:

        var gtgGenerator = new CustomGtgGenerator();
        var connectionString = ConfigurationManager.ConnectionStrings["BioEntities"].ConnectionString;
        var allAccessionsFromObs = _GetAccessionListFromDataFiles(collectionId);
        Parallel.ForEach(allAccessionsFromObs, cloneIdAndAccessions =>
            DoWork(gtgGenerator, taxonId, organismId, cloneIdAndAccessions, connectionString));

    Inside DoWork I use the BioEntities context:

        using (var bioEntities = new BioEntities(connectionString)) { ... }


  • Help a CRUD programmer think about an "approval workflow"

    - by gerdemb
    I've been working on a web application that is basically a CRUD application (Create, Read, Update, Delete). Recently, I've started working on what I'm calling an "approval workflow". Basically, a request is generated for a material and then sent for approval to a manager. Depending on what is requested, different people need to approve the request, or perhaps send it back to the requester for modification. The approvers need to keep track of what to approve and what has been approved, and the requesters need to see the status of their requests.

    As a "CRUD" developer, I'm having a hard time wrapping my head around how to design this. What database tables should I have? How do I keep track of the state of the request? How should I notify users of actions that have happened to their requests? Is there a design pattern that could help me with this? Should I be drawing state machines in my code? I think this is a generic programming question, but if it makes any difference I'm using Django with MySQL.


  • identify accounts using zend-openid [HELP]

    - by tawfekov
    Hi! I have successfully implemented simple OpenID authentication with Gmail, Yahoo, MyOpenID, AOL and many others, exactly as Stack Overflow does, but I have a problem with identifying accounts. For example:

        'openid_ns' => string 'http://specs.openid.net/auth/2.0' (length=32)
        'openid_mode' => string 'id_res' (length=6)
        'openid_op_endpoint' => string 'https://www.google.com/accounts/o8/ud' (length=37)
        'openid_response_nonce' => string '2010-05-02T06:35:40Z9YIASispxMvKpw' (length=34)
        'openid_return_to' => string 'http://dev.local/download/index/gettingback' (length=43)
        'openid_assoc_handle' => string 'AOQobUf_2jrRKQ4YZoqBvq5-O9sfkkng9UFxoE9SEBlgWqJv7Sq_FYgD7rEXiLLXhnjCghvf' (length=72)
        'openid_signed' => string 'op_endpoint,claimed_id,identity,return_to,response_nonce,assoc_handle' (length=69)
        'openid_sig' => string 'knieEHpvOc/Rxu1bkUdjeE9YbbFX3lqZDOQFxLFdau0=' (length=44)
        'openid_identity' => string 'https://www.google.com/accounts/o8/id?id=AItOawk0NAa51T3xWy-oympGpxFj5CftFsmjZpY' (length=80)
        'openid_claimed_id' => string 'https://www.google.com/accounts/o8/id?id=AItOawk0NAa51T3xWy-oympGpxFj5CftFsmjZpY' (length=80)

    I thought that openid_claimed_id or openid_identity would be identical for every account, but after some testing it turns out the claimed id can be changed by the users themselves in Yahoo's case, and changes randomly in Gmail's case. FYI, I am using Zend OpenID with a bit of modification on the consumer class, and I can't use SREG [Simple Registration Extension] because it currently uses the OpenID v1 specification :(

    I haven't read the OpenID specification, so please help me figure out how to identify OpenID accounts - that is, how to know whether an account has signed in before or not.


  • How to compare two file structures in PHP?

    - by OM The Eternity
    I have a function which gives me the complete file structure up to n levels:

        function getDirectory($path = '.', $ignore = '') {
            $dirTree = array();
            $dirTreeTemp = array();
            $ignore[] = '.';
            $ignore[] = '..';

            $dh = @opendir($path);
            while (false !== ($file = readdir($dh))) {
                if (!in_array($file, $ignore)) {
                    if (!is_dir("$path/$file")) {
                        // display file and directory names with their modification times
                        $stat = stat("$path/$file");
                        $statdir = stat("$path");
                        $dirTree["$path"][] = $file . " === " . date('Y-m-d H:i:s', $stat['mtime'])
                            . " Directory == " . $path . "===" . date('Y-m-d H:i:s', $statdir['mtime']);
                    } else {
                        $dirTreeTemp = getDirectory("$path/$file", $ignore);
                        if (is_array($dirTreeTemp)) $dirTree = array_merge($dirTree, $dirTreeTemp);
                    }
                }
            }
            closedir($dh);
            return $dirTree;
        }

        $ignore = array('.htaccess', 'error_log', 'cgi-bin', 'php.ini', '.ftpquota');
        // function call
        $dirTree = getDirectory('.', $ignore);
        // print the file structure array
        print_r($dirTree);

    Now, here is my requirement. I have two sites:

      - The development/test site, where I test all the changes.
      - The production site, where I finally post all the changes once tested on the development site.

    For example, suppose I have tested an image upload on the development/test site and found it appropriate to publish on the production site. I will then transfer the development/test DB details to the production DB, but I also want to compare the file structures so I can transfer the corresponding image file to the production folder. There could be a situation where I updated an image by editing it and uploading it with the same name; in that case the image file would already be present on production, which rules out simple "file_exists" logic. For these types of situations, how can I compare the two file structures so the synchronization is done as required?


  • Is there any performance issue using Row_Number to implement table paging in Sql Server 2008?

    - by majkinetor
    I want to implement table paging using this method:

        SET @PageNum = 2;
        SET @PageSize = 10;

        WITH OrdersRN AS
        (
            SELECT ROW_NUMBER() OVER(ORDER BY OrderDate, OrderID) AS RowNum, *
            FROM dbo.Orders
        )
        SELECT *
        FROM OrdersRN
        WHERE RowNum BETWEEN (@PageNum - 1) * @PageSize + 1
                         AND @PageNum * @PageSize
        ORDER BY OrderDate, OrderID;

    Is there anything I should be aware of? The table has millions of records. Thanks.

    EDIT: After using the suggested MAXROWS method for some time (which works really, really fast), I had to switch back to the ROW_NUMBER method because of its greater flexibility. I am also very happy with its speed so far (I am working with a view having more than 1M records and 10 columns). To use any kind of query I use the following modification:

        PROCEDURE [dbo].[PageSelect]
        (
            @Sql nvarchar(512),
            @OrderBy nvarchar(128) = 'Id',
            @PageNum int = 1,
            @PageSize int = 0
        )
        AS
        BEGIN
            SET NOCOUNT ON

            DECLARE @tsql as nvarchar(1024)
            DECLARE @i int, @j int

            IF (@PageSize <= 0) OR (@PageSize > 10000)
                SET @PageSize = 10000 -- never return more than 10K records

            SET @i = (@PageNum - 1) * @PageSize + 1
            SET @j = @PageNum * @PageSize

            SET @tsql = 'WITH MyTableOrViewRN AS
                (
                    SELECT ROW_NUMBER() OVER(ORDER BY ' + @OrderBy + ') AS RowNum, *
                    FROM MyTableOrView
                    WHERE ' + @Sql + '
                )
                SELECT * FROM MyTableOrViewRN
                WHERE RowNum BETWEEN ' + CAST(@i as varchar) + ' AND ' + CAST(@j as varchar)

            EXEC(@tsql)
        END

    If you use this procedure, make sure you prevent SQL injection.


  • How to make a connection pool in a Spring application using BasicDataSource

    - by vipin
    Hi friends, I have created an application in which I need to configure connection pooling. I am configuring the connection pooling in the Spring config file using BasicDataSource, but there is some problem creating the connection pool. Please tell me how to create connection pooling in a Spring application using BasicDataSource. I tried something like this in the Spring config:

        <bean id="datasource" class="org.apache.commons.dbcp.BasicDataSource">
            <property name="driverClassName">
                <value>com.mysql.jdbc.Driver</value>
            </property>
            <property name="url">
                <value>jdbc:mysql://192.168.1.12:3306/revup?noAccessToProcedureBodies=true</value>
                <!-- jdbc:mysql://localhost:3306/revup?noAccessToProcedureBodies=true -->
            </property>
            <property name="username">
                <value>revuser</value>
                <!-- root -->
            </property>
            <property name="password">
                <value>kjacob</value>
                <!-- gme997FK -->
            </property>
            <property name="poolPreparedStatements">
                <value>true</value>
            </property>
            <property name="initialSize">
                <value>2</value>
            </property>
            <property name="maxActive">
                <value>15</value>
            </property>
        </bean>

    Is there any modification needed to this code? Please tell me. Thanks in advance.


  • Java: design for using many executor services and only a few threads

    - by Guillaume
    I need to run multiple threads in parallel to perform some tests. My 'test engine' will have n tests to perform, each one doing k sub-tests. Each test result is stored for later usage. So I have n*k processes that can run concurrently, and I'm trying to figure out how to use the Java concurrency tools efficiently.

    Right now I have one executor service at the test level and n executor services at the sub-test level. I create my list of Callables for the test level. Each test callable then creates another list of callables for the sub-test level. When invoked, a test callable subsequently invokes all of its sub-test callables:

        test 1
            subtest a1
            subtest ...1
            subtest k1
        test n
            subtest a2
            subtest ...2
            subtest k2

    Call sequence:

        test manager creates test 1 callable
        test 1 callable creates subtests a1 to k1
        test n callable creates subtests an to kn
        test manager invokes all test callables
        test 1 callable invokes all subtests a1 to k1
        test n callable invokes all subtests an to kn

    This is working fine, but a lot of new threads get created. I cannot share an executor service, since I need to call 'shutdown' on the executors. My idea to fix this problem is to provide the same fixed-size thread pool to each executor service. Do you think that is a good design? Am I missing something more appropriate/simple for doing this?


  • PHP/MySQL time zone migration

    - by El Yobo
    I have an application that currently stores timestamps in MySQL DATETIME and TIMESTAMP values. However, the application needs to accept data from users in multiple time zones and show timestamps in each user's own time zone. This is how I plan to amend the application; I would appreciate any suggestions to improve the approach.

    Database modifications:

      - All TIMESTAMPs will be converted to DATETIME values. This ensures consistency of approach and avoids having MySQL try to do clever things and convert time zones (I want to keep the conversion in PHP, as that involves less modification to the application and will be more portable when I eventually manage to escape from MySQL).
      - All DATETIME values will be adjusted to UTC (currently they are all in Australian EST).

    Query modifications:

      - All usage of NOW() will be replaced with UTC_TIMESTAMP() in queries, triggers, functions, etc.

    Application modifications:

      - The application must store each user's time zone and preferred date format (e.g. US vs the rest of the world).
      - All timestamps will be converted according to the user's settings before being displayed.
      - All input timestamps will be converted to UTC according to the user's settings before being stored.

    Additional notes: the format conversion will be done at the application level for several main reasons:

      - The approach to converting time zones varies from DB to DB, so handling it there would be non-portable (and I really hope to be migrating away from MySQL some time in the not-too-distant future).
      - MySQL TIMESTAMPs are limited to a restricted range of permitted dates (~1970 to ~2038).
      - MySQL TIMESTAMPs have other undesirable attributes, including bizarre auto-update behaviour (if not carefully disabled) and sensitivity to the server time zone settings (and I suspect I might screw these up when I migrate to Amazon later in the year).

    Is there anything I'm missing here, or does anyone have better suggestions for the approach?


  • Using SMO to call Database.ExecuteNonQuery() concurrently?

    - by JimDaniel
    I have been banging my head against the wall trying to figure out how I can run update scripts concurrently against multiple databases in a single SQL Server instance using SMO. Our environments have an ever-increasing number of databases which need updating, and iterating through them one at a time is becoming a problem (too slow).

    From what I understand, SMO does not support concurrent operations, and my tests have borne that out. There seems to be shared memory at the Server object level, for things like DataReader context, which keeps throwing exceptions such as "reader is already open". I apologize for not having the exact exceptions I am getting; I will try to get them and update this post.

    I am no expert on SMO and am just feeling my way through, to be honest. I'm not really sure I am approaching it the right way, but it's something that has to be done, or our productivity will slow to a crawl. So how would you do something like this? Am I using the wrong technology with SMO? All I want to do is execute SQL scripts against databases in a single SQL Server instance in parallel.

    Thanks for any help you can give,
    Daniel


  • Sql Compact and __sysobjects

    - by Scott Wisniewski
    I have some SQL Compact queries that create tables inside of a transaction. This is mainly because I need to simulate temporary tables, which SQL Compact does not support. I do this by creating a real table and then dropping it at the end of the transaction. This mostly works.

    Sometimes, however, when creating the tables, SQL Compact will try to acquire PAGE-level locks on the __sysobjects table. If there are several concurrent queries running that create "temp" tables, the attempt to acquire a page lock can result in a deadlock followed by a SqlLockTimeout exception.

    For normal tables I could fix this using a "with (rowlock)" hint. However, because I'm not writing the query that inserts into __sysobjects (SQL Server does that in response to "create table"), I can't do this. Does anyone know of a way around this? I've thought about pulling the table creation out of the transaction, but that opens up the possibility of phantom temporary tables that I'd then need to clean up regularly. Ideally I'd like to avoid that if possible.


  • Preventing ActiveRecord save() on an instance

    - by Craig Walker
    I have an ActiveRecord model object Foo; it represents a standard database row. I want to be able to display modified versions of instances of this object. I'd like to reuse the class itself, as it already has all the hooks and aspects I'll need. (For example: I already have a view that displays the appropriate attributes.)

    Basically I want to clone the model instance, modify some of its properties, and feed it back to the caller (view, test, etc). I do not want these attribute modifications getting back into the database. However, I do want to include the id attribute in the cloned version, as it makes dealing with the route helpers much easier. Thus, I plan on calling ActiveRecord::Base.clone(), manually setting the id of the cloned instance, and then making the appropriate attribute changes to the new instance.

    This has me worried, though; one save() on the modified instance and my original data will get clobbered. So, I'm looking to lock down the new instance so that it won't hurt anything else. I'm already planning on calling freeze() (on the understanding that this prevents further modification to the object, though the documentation isn't terribly clear). However, I don't see any obvious way to prevent a save(). What would be the best approach to achieving this?


  • Keeping cached browser data inside ASP update panel textboxes/dropdowns for browser back click

    - by pmlevere
    I'm new to VB.NET/ASP and am running a VB web application in a visual database program called IronSpeed Designer. I'm primarily using IronSpeed in this case for its login/role security features. I have a basic two-page setup for this app: the user logs in and is taken to AccountEntry.aspx, enters data into textboxes and selects some dropdown values that are linked to a SQL database, then clicks "submit" to move to Results.aspx. On Results.aspx, the user can change data and then generate several types of reports (PDF, Excel, etc.).

    I'm used to setting up ASP controls inside ASP content areas, where, if a user performs a browser back click, the previously entered data will still be on the page for potential modification. However, in this web app IronSpeed sets up the page and ASP controls inside an ASP update panel, and it appears that inside an update panel, cached values can't be seen on a browser back click. In this case, it's important that the originally entered values still be there if the user advances to Results.aspx and then clicks the browser back button to modify a value on AccountEntry.aspx. If I have to, I'll set up session variables and disable browser back clicks, but that is a last resort. Is there any way to save cached data inside an ASP update panel and have it there for a browser back click?

