Search Results

Search found 6311 results on 253 pages for 'limit clause'.


  • How can this be done with (N)Hibernate?

    - by Vilx-
    I'm creating a windows forms application with NHibernate. It's an MDI application, so there is no limit to how many forms the user can have open at the same time (probably many). For most forms I want to have an "OK" and a "Cancel" button. Both close the form, but "OK" also saves the modified data to the DB. The forms can be pretty complex, and the modifications are likely to touch a whole graph of objects, adding some, deleting some, and changing some more. It would be good if the changes could be automatically detected and persisted as needed, without the need to manually keep track of each of them. What would be a good way to do this? Extra information: I can make whatever DB schema I want. I'm using MSSQL 2008 and currently have decided for GUID primary keys (with guid.comb generator) and a TIMESTAMP column for optimistic concurrency. I tried to simply set FlushMode of a NHibernate ISession to Never, doing all modifications as needed, and then calling Flush() if the user clicked OK. But that didn't work.
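
    A common pattern for this is one NHibernate ISession per MDI child form, kept open for the form's lifetime so NHibernate's own dirty tracking records the changes, and flushed inside a transaction only when the user clicks OK. A minimal sketch, not a drop-in (the form and factory names are illustrative):

        using System;
        using System.Windows.Forms;
        using NHibernate;

        public class EditForm : Form
        {
            private readonly ISession session;

            public EditForm(ISessionFactory sessionFactory)
            {
                session = sessionFactory.OpenSession();
                session.FlushMode = FlushMode.Never;   // "Manual" in newer NHibernate versions
                // ... load the whole object graph through this session so it is tracked ...
            }

            private void OkButton_Click(object sender, EventArgs e)
            {
                using (ITransaction tx = session.BeginTransaction())
                {
                    session.Flush();    // writes every tracked change as one unit of work
                    tx.Commit();
                }
                Close();
            }

            private void CancelButton_Click(object sender, EventArgs e)
            {
                session.Close();        // unflushed changes are simply discarded
                Close();
            }
        }

    If Flush() appeared to do nothing, the usual culprit is that the edited objects were loaded through a different (or already closed) session, so the flushing session never tracked them.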

    Read the article

  • How do I strip multiple (optional) parts of a SQL string using .NET Regular Expressions?

    - by Luc
    I've been working on this for a few hours now and can't find any help on it. Basically, I'm trying to split a SQL string into various parts (fields, from, where, having, groupBy, orderBy). I refuse to believe that I'm the first person to ever try to do this, so I'd like to ask for some advice from the StackOverflow community. :) To understand what I need, assume the following SQL string: select * from table1 inner join table2 on table1.id = table2.id where field1 = 'sam' having table1.field3 > 0 group by table1.field4 order by table1.field5 I created a regular expression to group the parts accordingly: select\s+(?<fields>.+)\s+from\s+(?<from>.+)\s+where\s+(?<where>.+)\s+having\s+(?<having>.+)\s+group\sby\s+(?<groupby>.+)\s+order\sby\s+(?<orderby>.+) This gives me the following results: fields => * from => table1 inner join table2 on table1.id = table2.id where => field1 = 'sam' having => table1.field3 > 0 groupby => table1.field4 orderby => table1.field5 The problem I'm faced with is that if any part of the SQL string is missing after the 'from' clause, the regular expression doesn't match. To fix that, I've tried putting each optional part in its own (...)? group, but that doesn't work. It simply puts all the optional parts (where, having, groupBy, and orderBy) into the 'from' group. Any ideas?
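
    One way to make the trailing clauses optional is to wrap each one in an optional non-capturing group, switch the inner quantifiers to lazy, and anchor the end so the 'from' group cannot swallow everything. A sketch; it assumes the clauses appear in the usual order, and it will still trip over keywords inside string literals or subqueries:

        using System.Text.RegularExpressions;

        string sql = "select * from table1 where field1 = 'sam' order by table1.field5";

        string pattern =
            @"select\s+(?<fields>.+?)\s+from\s+(?<from>.+?)" +
            @"(?:\s+where\s+(?<where>.+?))?" +
            @"(?:\s+having\s+(?<having>.+?))?" +
            @"(?:\s+group\s+by\s+(?<groupby>.+?))?" +
            @"(?:\s+order\s+by\s+(?<orderby>.+?))?\s*$";

        Match m = Regex.Match(sql, pattern,
            RegexOptions.IgnoreCase | RegexOptions.Singleline);

        string from = m.Groups["from"].Value;        // a missing clause yields an empty string
        string orderBy = m.Groups["orderby"].Value;

    The lazy quantifiers plus the end anchor are what stop the mandatory groups from absorbing the optional ones.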

    Read the article

  • Suggestion on Database structure for relational data

    - by miccet
    Hi there. I've been wrestling with this problem for quite a while now and the automatic mails with 'Slow Query' warnings are still popping in. Basically, I have Blogs with a corresponding table, as well as a table that keeps track of how many times each Blog has been viewed. This last table has a huge number of records since the page gets relatively high traffic and every hit is logged as an individual row. I have tried indexes on the fields that are included in the WHERE clause, but they don't seem to help. I have also tried to clean the table each week by removing records older than one week. So I'm asking you guys, how would you solve this? The query that I know is causing the slowness is generated by Rails and looks like this:

        SELECT count(*) AS count_all FROM blog_views
        WHERE (created_at >= '2010-01-01 00:00:01' AND blog_id = 1);

    The tables have the following structures:

        CREATE TABLE IF NOT EXISTS `blogs` (
          `id` int(11) NOT NULL auto_increment,
          `name` varchar(255) default NULL,
          `perma_name` varchar(255) default NULL,
          `author_id` int(11) default NULL,
          `created_at` datetime default NULL,
          `updated_at` datetime default NULL,
          `blog_picture_id` int(11) default NULL,
          `blog_picture2_id` int(11) default NULL,
          `page_id` int(11) default NULL,
          `blog_picture3_id` int(11) default NULL,
          `active` tinyint(1) default '1',
          PRIMARY KEY (`id`),
          KEY `index_blogs_on_author_id` (`author_id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=1;

        CREATE TABLE IF NOT EXISTS `blog_views` (
          `id` int(11) NOT NULL auto_increment,
          `blog_id` int(11) default NULL,
          `ip` varchar(255) default NULL,
          `created_at` datetime default NULL,
          `updated_at` datetime default NULL,
          PRIMARY KEY (`id`),
          KEY `index_blog_views_on_blog_id` (`blog_id`),
          KEY `created_at` (`created_at`)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=1;
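
    Since the slow query filters on both blog_id and created_at, the two single-column indexes above still leave MySQL scanning a wide range; a composite index covering both columns is usually the first thing to try. A sketch (the index name is only illustrative):

        ALTER TABLE `blog_views`
          ADD INDEX `index_blog_views_on_blog_id_and_created_at` (`blog_id`, `created_at`);

    With (blog_id, created_at) the COUNT can be answered from the index alone; if that is still too slow, the next step is usually a small per-blog summary table incremented on each hit, instead of counting raw rows at read time.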

    Read the article

  • When will a TCP network packet be fragmented at the application layer?

    - by zooropa
    When will a TCP packet be fragmented at the application layer? When a TCP packet is sent from an application, will the recipient at the application layer ever receive the packet in two or more packets? If so, what conditions cause the packet to be divided? It seems like a packet won't be fragmented until it reaches the Ethernet (at the network layer) limit of 1500 bytes. But that fragmentation will be transparent to the recipient at the application layer, since the network layer will reassemble the fragments before sending the packet up to the next layer, right?

    Read the article

  • I need to simplify a MySQL sub query for performance - please help

    - by Richard
    I have the following query, which is taking 3 seconds on a table of 1500 rows. Does someone know how to simplify it?

        SELECT dealers.name, dealers.companyName, dealer_accounts.balance
        FROM dealers
        INNER JOIN dealer_accounts ON dealers.id = dealer_accounts.dealer_id
        WHERE dealer_accounts.id = (
            SELECT id FROM dealer_accounts
            WHERE dealer_accounts.dealer_id = dealers.id
              AND dealer_accounts.date < '2010-03-30'
            ORDER BY dealer_accounts.date DESC, dealer_accounts.id DESC
            LIMIT 1
        )
        ORDER BY dealers.name

    I need the latest dealer_accounts record for each dealer by a certain date, with the join on the dealer_id field of the dealer_accounts table. This really should be simple; I don't know why I am struggling to find something.
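
    One common rewrite is to compute the latest account date per dealer once, in a derived table, and join back to it, instead of running a correlated LIMIT 1 subquery for every row. A sketch; note it does not break ties on date the way the id DESC tiebreaker does:

        SELECT d.name, d.companyName, da.balance
        FROM dealers d
        INNER JOIN dealer_accounts da ON da.dealer_id = d.id
        INNER JOIN (
            SELECT dealer_id, MAX(date) AS max_date
            FROM dealer_accounts
            WHERE date < '2010-03-30'
            GROUP BY dealer_id
        ) latest ON latest.dealer_id = da.dealer_id AND latest.max_date = da.date
        ORDER BY d.name;

    An index on dealer_accounts (dealer_id, date) helps both the original and the rewritten form.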

    Read the article

  • PHP class_exists always returns true

    - by Ali
    I have a PHP class that needs some pre-defined globals before the file is included. File: includes/Product.inc.php

        if (class_exists('Product')) {
            return;
        }
        // This class requires some predefined globals
        if ( !isset($gLogger) || !isset($db) || !isset($glob) ) {
            return;
        }
        class Product { ... }

    The above is included in other PHP files that need to use Product, via require_once. Anyone who wants to use Product must however ensure those globals are available; at least that's the idea. I recently debugged an issue in a function within the Product class which was caused because $gLogger was null. The code requiring the above Product.inc.php had not bothered to create $gLogger. So the question is: how was this class ever included if $gLogger was null? I tried to debug the code (xdebug in NetBeans) and put a breakpoint at the start of Product.inc.php to find out, and every time it came to the if (class_exists('Product')) clause it would simply step in and return, thus never getting to the global checks. So how was it ever included the first time? This is PHP 5.1+ running under MAMP (Apache/MySQL). I don't have any autoloaders defined.
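
    Two small hardening steps make this kind of problem surface immediately rather than silently: pass false as the second argument of class_exists so the check can never trigger an autoloader, and fail loudly instead of returning when the globals are missing. A sketch of the guard, not a drop-in from the question:

        if (class_exists('Product', false)) {   // false = do not invoke autoloaders
            return;
        }
        if (!isset($gLogger, $db, $glob)) {
            throw new RuntimeException(
                'Product.inc.php requires $gLogger, $db and $glob to be set before inclusion');
        }

    Hitting the class_exists branch on what looks like the first include means the class was already defined earlier in the request, for example through another include path, presumably at a point where the globals happened to be set.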

    Read the article

  • Generic that takes only numeric types (int double etc)?

    - by brandon
    In a program I'm working on, I need to write a function to take any numeric type (int, short, long etc.) and shove it into a byte array at a specific offset. There is a BitConverter.GetBytes() method that takes the numeric type and returns it as a byte array, and this method only takes numeric types. So far I have:

        private void AddToByteArray<T>(byte[] destination, int offset, T toAdd) where T : struct
        {
            Buffer.BlockCopy(BitConverter.GetBytes(toAdd), 0, destination, offset, sizeof(toAdd));
        }

    So basically my goal is that, for example, a call to AddToByteArray(array, 3, (short)10) would take 10 and store it in the 4th slot of array. The explicit cast exists because I know exactly how many bytes I want it to take up. There are cases where I would want a number that is small enough to be a short to really take up 4 bytes. On the flip side, there are times when I want an int to be crunched down to just a single byte. I'm doing this to create a custom network packet, if that makes any ideas pop into your heads. If the where clause of a generic supported something like "where T : int || long || etc" I would be OK. (And no need to explain why they don't support that; the reason is fairly obvious.) Any help would be greatly appreciated! Edit: I realize that I could just do a bunch of overloads, one for each type I want to support... but I'm asking this question because I want to avoid precisely that :)
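
    C# generics can't be constrained to "any numeric type", but one way to keep a single entry point is to dispatch on the value's runtime type inside the method. A sketch (it assumes C# 8+ switch expressions are available; the desired-width cast still happens at the call site, exactly as in the question):

        private static void AddToByteArray<T>(byte[] destination, int offset, T toAdd)
            where T : struct
        {
            // BitConverter.GetBytes has no generic overload, so pick the right one per type.
            byte[] bytes = toAdd switch
            {
                byte b    => new[] { b },
                short s   => BitConverter.GetBytes(s),
                ushort us => BitConverter.GetBytes(us),
                int i     => BitConverter.GetBytes(i),
                uint ui   => BitConverter.GetBytes(ui),
                long l    => BitConverter.GetBytes(l),
                ulong ul  => BitConverter.GetBytes(ul),
                _ => throw new ArgumentException("Unsupported type: " + typeof(T).Name)
            };
            Buffer.BlockCopy(bytes, 0, destination, offset, bytes.Length);
        }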

    Read the article

  • Oracle SQL: Query results from previous X isoweeks () (where X might be > 52)

    - by tommy-o-dell
    How could I adapt this query to show the previous 61 weeks (still excluding the current week)? My query currently shows me the total weekly sales for 2010, grouped by ISO week and ISO year (excluding the current week).

        select to_char(order_date,'IYYY') as iso_year,
               to_char(order_date,'IW') as iso_week,
               sum(sale_amount)
        from orders
        where to_char(order_date,'IW') <> to_char(SYSDATE,'IW')
          and to_char(order_date,'IYYY') = 2010
        group by to_char(order_date,'IYYY'), to_char(order_date,'IW')

    I realize I could probably just omit the "2010" requirement, order by desc and limit the results to a certain number of rows. But that just doesn't seem right! Much appreciate any help pointing me in the right direction!
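
    One way to express "the previous 61 ISO weeks" directly is to turn it into a date range with TRUNC(date, 'IW'), which truncates to the Monday starting the ISO week; that also lets an index on order_date be used, unlike the to_char comparisons. A sketch:

        select to_char(order_date, 'IYYY') as iso_year,
               to_char(order_date, 'IW')   as iso_week,
               sum(sale_amount)            as weekly_sales
        from   orders
        where  order_date >= trunc(sysdate, 'IW') - 61 * 7   -- Monday of the ISO week 61 weeks ago
        and    order_date <  trunc(sysdate, 'IW')            -- excludes the current ISO week
        group  by to_char(order_date, 'IYYY'), to_char(order_date, 'IW')
        order  by iso_year, iso_week;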

    Read the article

  • Send data to webserver from C#, what's the most efficient way?

    - by Brian
    I am sending gps coordinates from a windows mobile phone to a webserver using a basic program I wrote in C#. The problem is the data plan on the phone only allows 4 MB per month. I was planning on updating the location every 10 seconds. Currently I am just creating a webrequest every 10 seconds to a php page on the server and the coordinates are passed over in the url, the php page saves them to the database. This generates about 1K of data per request, at this rate I will hit my data limit in less than a day. Is there a more efficient way to do this?
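
    At roughly 1 KB per request, most of the cost is HTTP and URL overhead rather than the coordinates themselves, so the usual approach is to buffer fixes locally and post them in batches as a compact binary body. A sketch of the packing side only; the 12-byte field layout is an assumption, not something from the question:

        // One GPS fix: when it was taken and where.
        struct Fix
        {
            public DateTime TimeUtc;
            public double Latitude;
            public double Longitude;
        }

        // 12 bytes per fix: 4-byte Unix time, then lat/lon as 1e-6-degree signed integers.
        static byte[] PackFixes(Fix[] fixes)
        {
            var epoch = new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc);
            var buffer = new byte[fixes.Length * 12];
            for (int i = 0; i < fixes.Length; i++)
            {
                int secs = (int)(fixes[i].TimeUtc - epoch).TotalSeconds;
                Buffer.BlockCopy(BitConverter.GetBytes(secs), 0, buffer, i * 12, 4);
                Buffer.BlockCopy(BitConverter.GetBytes((int)(fixes[i].Latitude * 1e6)), 0, buffer, i * 12 + 4, 4);
                Buffer.BlockCopy(BitConverter.GetBytes((int)(fixes[i].Longitude * 1e6)), 0, buffer, i * 12 + 8, 4);
            }
            return buffer;
        }

    Posting, say, 30 buffered fixes every 5 minutes keeps the payload around 360 bytes plus one set of HTTP headers, instead of 30 separate ~1 KB requests.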

    Read the article

  • Do bit operations cause programs to run slower?

    - by flashnik
    I'm dealing with a problem which needs to work with a lot of data. Currently its values are represented as an unsigned int. I know that the real values do not exceed a limit of 1000. Questions: 1) I can use unsigned short to store them; an upside is that it'll use less storage space. Will performance suffer? 2) If I decide to store the data as short but all the calling functions use int, I recognize that I need to convert between these datatypes when storing or extracting values. Will performance suffer? Will the loss in performance be dramatic? 3) What if I decide not to use short but instead pack 10 bits into an array of unsigned int? What will happen in this case compared with the previous ones?
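
    For question 3, the packed layout trades memory for a shift and a mask on every access, which the narrower-type options don't need. A minimal C++ sketch of storing three 10-bit values per 32-bit word (this layout is one possible choice, not the only one):

        #include <cstddef>
        #include <vector>

        // Values must be < 1024 (10 bits); three of them fit in each 32-bit word.
        inline void put10(std::vector<unsigned int>& data, std::size_t i, unsigned int value)
        {
            const unsigned int shift = 10u * (i % 3);
            data[i / 3] = (data[i / 3] & ~(0x3FFu << shift)) | ((value & 0x3FFu) << shift);
        }

        inline unsigned int get10(const std::vector<unsigned int>& data, std::size_t i)
        {
            return (data[i / 3] >> (10u * (i % 3))) & 0x3FFu;
        }

    Whether the extra ALU work matters depends on whether the workload is memory-bound; fitting roughly three times as many values per cache line can outweigh the cost of the shifts.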

    Read the article

  • iPhone App Store RSS feed failed to open

    - by suvendu
    I wrote this:

        $opts = array(
            'http' => array(
                'user_agent' => 'PHP libxml agent',
            )
        );
        $context = stream_context_create($opts);
        libxml_set_streams_context($context);
        $doc = new DOMDocument();
        $doc->load("http://ax.itunes.apple.com/WebObjects/MZStoreServices.woa/ws/RSS/toppaidapplications/limit=25/xml");
        foreach ($doc->getElementsByTagName('item') as $node) {
            $title = $node->getElementsByTagName('title')->item(0)->nodeValue;
            $desc = $node->getElementsByTagName('description')->item(0)->nodeValue;
            $link = $node->getElementsByTagName('link')->item(0)->nodeValue;
            $date = $node->getElementsByTagName('pubDate')->item(0)->nodeValue;
            echo $title;
            echo '<br>';
            echo $desc;
        }
        ?>

    Read the article

  • How to query range of data in DB2 with highest performance?

    - by Fuangwith S.
    Usually, I need to retrieve data from a table in some range; for example, a separate page for each set of search results. In MySQL I use the LIMIT keyword, but I don't know the DB2 equivalent. Right now I use this query to retrieve a range of data:

        SELECT * FROM (
            SELECT SMALLINT(RANK() OVER(ORDER BY NAME DESC)) AS RUNNING_NO,
                   DATA_KEY_VALUE,
                   SHOW_PRIORITY
            FROM EMPLOYEE
            WHERE NAME LIKE 'DEL%'
            ORDER BY NAME DESC
            FETCH FIRST 20 ROWS ONLY
        ) AS TMP
        ORDER BY TMP.RUNNING_NO ASC
        FETCH FIRST 10 ROWS ONLY

    but I know it's bad style. So, how should I query for the best performance?
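
    The usual DB2 idiom for "rows 11 to 20" is a single ROW_NUMBER() window, numbered in the order you want, with the page selected by a range predicate on that number. A sketch for the second page of 10:

        SELECT RUNNING_NO, DATA_KEY_VALUE, SHOW_PRIORITY
        FROM (
            SELECT ROW_NUMBER() OVER (ORDER BY NAME DESC) AS RUNNING_NO,
                   DATA_KEY_VALUE,
                   SHOW_PRIORITY
            FROM EMPLOYEE
            WHERE NAME LIKE 'DEL%'
        ) AS TMP
        WHERE RUNNING_NO BETWEEN 11 AND 20
        ORDER BY RUNNING_NO;

    An index on NAME lets DB2 stop scanning soon after the requested window, and ROW_NUMBER() avoids the duplicate numbering that RANK() produces on ties.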

    Read the article

  • Migrating from mssql to firebird: pro and cons

    - by user193655
    I am considering the migration for several reasons: 1) SQL Server installation is a nightmare, especially for single-user software. My software installs in 10 seconds, SQL Server in 1 hour. Firebird installation is much easier. 2) SQL Server runs on Windows only. 3) My customers all have the Express edition. 4) I am not using any advanced features. I am now starting to use FILESTREAM, but the main reason for that is that the Express edition has a 4/10 GB database size limit. So these are all pros of moving to Firebird. What are the cons? I could also plan to support both platforms, but I fear this will backfire.

    Read the article

  • Security Exception while implementing global search for Messaging

    - by Sunil
    I am trying to enable global search for the messaging application (i.e., messages can be searched from the home screen search box). I have followed all the steps given in http://developer.android.com/reference/android/app/SearchManager.html and I am getting the following exception:

        04-16 12:49:26.917: ERROR/DatabaseUtils(102): java.lang.SecurityException: Permission Denial: reading com.android.providers.telephony.MmsSmsProvider uri content://mms-sms/search_suggest_query/m?limit=58 from pid=106, uid=10000 requires android.permission.READ_SMS

    I have set permissions in the MmsSmsProvider.java file for reading and writing SMS and for global search, but I still get this error. Can anyone help? Regards, Sunil.

    Read the article

  • HttpTunneling a TCPClient application

    - by user360116
    We have a custom chat application (C#) which uses TcpClient. We are having problems with clients who are behind a firewall or proxy. We know that these clients can browse the internet without a problem, so we decided to change our TcpClient application so that it uses HTTP messages to communicate. Will it be enough just to wrap our text messages with standard HTML tags and HTTP headers? We need a long-lasting connection. Does keep-alive have a limit? Do firewalls or proxies have time limits for "alive" connections?

    Read the article

  • How to generate .json file with PHP?

    - by Srinivas Tamada
        CREATE TABLE Posts {
            id INT PRIMARY KEY AUTO_INCREMENT,
            title VARCHAR(200),
            url VARCHAR(200)
        }

    json.php code:

        <?php
        $sql = mysql_query("select * from Posts limit 20");
        echo '{"posts": [';
        while ($row = mysql_fetch_array($sql)) {
            $title = $row['title'];
            $url = $row['url'];
            echo ' { "title":"'.$title.'", "url":"'.$url.'" },';
        }
        echo ']}';
        ?>

    I have to generate a results.json file.
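
    Rather than concatenating JSON by hand (which leaves a trailing comma and does no escaping), one option is to build a PHP array and let json_encode and file_put_contents produce the file. A sketch; the mysqli connection details are placeholders, not from the question:

        <?php
        $db = new mysqli('localhost', 'user', 'pass', 'mydb');   // credentials are placeholders
        $result = $db->query('SELECT title, url FROM Posts LIMIT 20');

        $posts = array();
        while ($row = $result->fetch_assoc()) {
            $posts[] = array('title' => $row['title'], 'url' => $row['url']);
        }

        // Escaping and commas are handled by the encoder.
        file_put_contents('results.json', json_encode(array('posts' => $posts)));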

    Read the article

  • SQL Server - Select one random record not showing duplicates

    - by Lukes123
    I have two tables, events and photos, which relate together via the 'Event_ID' column. I wish to select ONE random photo from each event and display them. How can I do this? I have the following which displays all the photos which are associated. How can I limit it to one per event? SELECT Photos.Photo_Id, Photos.Photo_Path, Photos.Event_Id, Events.Event_Title, Events.Event_StartDate, Events.Event_EndDate FROM Photos, Events WHERE Photos.Event_Id = Events.Event_Id AND Events.Event_EndDate < GETDATE() AND Events.Event_EndDate IS NOT NULL AND Events.Event_StartDate IS NOT NULL ORDER BY NEWID() Thanks Luke Stratton
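
    On SQL Server 2005 and later, one way to keep a single random photo per event is ROW_NUMBER() partitioned by the event and ordered by NEWID(), then filtering to the first row of each partition. A sketch based on the columns in the question:

        SELECT Photo_Id, Photo_Path, Event_Id, Event_Title, Event_StartDate, Event_EndDate
        FROM (
            SELECT p.Photo_Id, p.Photo_Path, e.Event_Id, e.Event_Title,
                   e.Event_StartDate, e.Event_EndDate,
                   ROW_NUMBER() OVER (PARTITION BY e.Event_Id ORDER BY NEWID()) AS rn
            FROM Photos p
            INNER JOIN Events e ON p.Event_Id = e.Event_Id
            WHERE e.Event_EndDate < GETDATE()
              AND e.Event_EndDate IS NOT NULL
              AND e.Event_StartDate IS NOT NULL
        ) AS ranked
        WHERE rn = 1;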

    Read the article

  • MySQL: SELECT highest column value when WHERE finds similar entries

    - by Ike
    My question is comparable to this one, but not quite the same. I have a database with a huge amount of books, with different editions of some of the same book titles. I'm looking for an SQL statement giving me the highest edition number of each of the titles I'm selecting with a WHERE clause (to find specific book series). Here's what the table looks like: |id|title |edition|year| |--|-------------------------|-------|----| |01|Serie One Title One |1 |2007| |02|Serie One Title One |2 |2008| |03|Serie One Title One |3 |2009| |04|Serie One Title Two |1 |2001| |05|Serie One Title Three |1 |2008| |06|Serie One Title Three |2 |2009| |07|Serie One Title Three |3 |2010| |08|Serie One Title Three |4 |2011| |--|-------------------------|-------|----| The result I'm looking for is this: |id|title |edition|year| |--|-------------------------|-------|----| |03|Serie One Title One |3 |2009| |04|Serie One Title Two |1 |2001| |08|Serie One Title Three |4 |2011| |--|-------------------------|-------|----| The closest I got was using this statement: select id, title, max(edition), max(year) from books where title like "serie one%" group by name; but it returns the highest edition and year and includes the first id it finds: |--|-----------------------|-------|----| |01|Serie One Title One |3 |2009| |04|Serie One Title Two |1 |2001| |05|Serie One Title Three |4 |2011| |--|-----------------------|-------|----| This fancy join also comes close, but doesn't give the right result: select b.id, b.title, b.edition, b.year from books b inner join (select name, max(edition) as maxedition from books group by title) g on b.edition = g.maxedition where b.title like "serie one%" group by title; Using this I'm getting unique titles, but mostly old editions.
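
    The join in the last attempt only matches on the edition number, so any row whose edition happens to equal some other title's maximum slips through; joining on both the title and its maximum edition gives exactly one row per title. A sketch:

        SELECT b.id, b.title, b.edition, b.year
        FROM books b
        INNER JOIN (
            SELECT title, MAX(edition) AS maxedition
            FROM books
            WHERE title LIKE 'serie one%'
            GROUP BY title
        ) g ON g.title = b.title AND g.maxedition = b.edition
        ORDER BY b.id;

    Against the sample data this returns the rows with ids 03, 04 and 08, as in the desired result.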

    Read the article

  • Database design efficiency with 1 to many relationships limited 1 to 3

    - by Joe
    This is in MySQL, but it's a database design issue. If you have a one-to-many relationship, like a bank customer to bank accounts, typically you would have the table that records the bank-account information carry a foreign key that keeps track of the relationship between account and customer. This follows third normal form and is a widely accepted way of doing it. Now let's say that you are going to limit a user to only having 3 accounts. The current database implementation will support this and nothing would need to change. But another way to do this would be to have 3 columns that hold the ids of the 3 respective accounts. By the way, this violates first normal form. The question is: what would be the advantages and disadvantages of recording the user-account relationship in this way over the traditional one?

    Read the article

  • Efficient way to combine results of two database queries.

    - by ensnare
    I have two tables on different servers, and I'd like some help finding an efficient way to combine and match the datasets. Here's an example: From server 1, which holds our stories, I perform a query like: query = """SELECT author_id, title, text FROM stories ORDER BY timestamp_created DESC LIMIT 10 """ results = DB.getAll(query) for i in range(len(results)): #Build a string of author_ids, e.g. '1314,4134,2624,2342' But, I'd like to fetch some info about each author_id from server 2: query = """SELECT id, avatar_url FROM members WHERE id IN (%s) """ values = (uid_list) results = DB.getAll(query, values) Now I need some way to combine these two queries so I have a dict that has the story as well as avatar_url and member_id. If this data were on one server, it would be a simple join that would look like: SELECT * FROM members, stories WHERE members.id = stories.author_id But since we store the data on multiple servers, this is not possible. What is the most efficient way to do this? Thanks.
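
    Since the second query already fetches every needed author row, the join can be done in application code: index the member rows by id in a dict and attach the avatar to each story. A sketch assuming the DB wrappers return rows as dicts; the DB1/DB2 names are placeholders for the two connections:

        stories = DB1.getAll(
            """SELECT author_id, title, text
               FROM stories
               ORDER BY timestamp_created DESC
               LIMIT 10""")

        author_ids = sorted({row["author_id"] for row in stories})
        placeholders = ",".join(["%s"] * len(author_ids))
        members = DB2.getAll(
            "SELECT id, avatar_url FROM members WHERE id IN (%s)" % placeholders,
            author_ids)

        # Index members by id, then attach the avatar to each story dict.
        avatars = {m["id"]: m["avatar_url"] for m in members}
        combined = [dict(story, avatar_url=avatars.get(story["author_id"]))
                    for story in stories]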

    Read the article

  • Are tags considered requirements? [closed]

    - by krunk
    I'm new to stack overflow, made a few responses. I responded to a question that was something like: "I need to do X, I found a sed one liner that almost does it, but not quite" And was tagged 'sed'. I assumed the user just wanted a solution and tagged it with sed because it was a possible answer. So I suggested an alternate way using another tool that was more concise and didn't involve regex (another one-liner). I received a down vote for not meeting the requirement of the user. Since I'd like to make sure I conform to good forum etiquette, my question is: Are tags considered hard requirements that should limit the scope of responses? (within reason of course, a .NET question with a .NET tag obviously shouldn't receive a ruby answer).

    Read the article

  • How to find/display the Upload File limits on IIS with ASP.NET?

    - by NVRAM
    I have a web service on which the end users will be uploading ZIP archives that can be very large (one test file is over 200MB). I'd like to handle oversized files proactively and size-limited upload failures gracefully. Since the web app will be deployed on customers' machines, I cannot easily ensure that the configuration matches any fixed size. I've documented how to use the appcmd command for them to set the requestLimits.maxAllowedContentLength value beyond the 30MB default. But I'd like to handle it in the web app; I'm hoping for two things: 1) to show the current limit on the page where they initiate the file upload, something along the lines of "Each file upload is limited to 15MB. If your archive is larger, (etc., etc., etc.)", and 2) to give a meaningful error when that size is exceeded. Currently, it takes a long time for the data to be sent, and then I see a misleading 404 page. Any thoughts?
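
    ASP.NET's own limit (httpRuntime maxRequestLength, in kilobytes) can be read from configuration at runtime and surfaced on the upload page; a sketch of that part only. Note this is half the picture: the IIS 7+ requestLimits.maxAllowedContentLength value lives in the system.webServer section and is enforced before ASP.NET sees the request, which is typically why an oversized upload surfaces as a 404 (404.13) rather than a friendly error.

        using System.Configuration;
        using System.Web.Configuration;

        // Read the ASP.NET upload limit (in KB) from web.config / machine.config.
        var runtime = (HttpRuntimeSection)ConfigurationManager.GetSection("system.web/httpRuntime");
        int maxRequestKb = runtime.MaxRequestLength;
        string notice = string.Format(
            "Each file upload is limited to {0} MB.", maxRequestKb / 1024);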

    Read the article

  • SharePoint SLK and T-SQL xp_cmdshell safety

    - by Mitchell Skurnik
    I am looking into a TSQL command called "xp_cmdshell" to use to monitor a change to a the SLK (SharePoint Learning Kit) database and then execute a batch or PowerShell script that will trigger some events that I need. (It is bad practice to modify SharePoint's database directly, so I will be using its API) I have been reading on various blogs and MSDN that there are some security concerns with this approach. The sites suggest that you limit security so the command can be executed by only a specific user role. What other tips/suggestions would you recommend with using "xp_cmdshell"? Or should I go about this another way and create a script or console application that constantly checks if a change has been made? I am running Server 2008 with SQL 2008.
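
    If you do go the xp_cmdshell route, the usual containment steps are to enable it explicitly, map non-sysadmin callers to a dedicated low-privilege Windows account via the proxy account, and grant EXECUTE only to the role that runs the monitor. A sketch to run in the master database; the account, password and role names are placeholders:

        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'xp_cmdshell', 1;
        RECONFIGURE;

        -- Non-sysadmin callers run the shell command as this low-privilege Windows account.
        EXEC sp_xp_cmdshell_proxy_account 'DOMAIN\SlkMonitorSvc', 'placeholder-password';

        -- Only members of this role may call it at all.
        GRANT EXECUTE ON sys.xp_cmdshell TO SlkMonitorRole;

    That said, a scheduled console application or SQL Agent job that polls for changes and calls the SharePoint API, as you suggest, avoids giving the SQL Server process any shell access at all.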

    Read the article

  • Sub Query making Query slow.

    - by Muhammad Kashif Nadeem
    Please copy and paste the following script.

        DECLARE @MainTable TABLE(MainTablePkId int)
        INSERT INTO @MainTable SELECT 1
        INSERT INTO @MainTable SELECT 2

        DECLARE @SomeTable TABLE(SomeIdPk int, MainTablePkId int, ViewedTime1 datetime)
        INSERT INTO @SomeTable SELECT 1, 1, DATEADD(dd, -10, getdate())
        INSERT INTO @SomeTable SELECT 2, 1, DATEADD(dd, -9, getdate())
        INSERT INTO @SomeTable SELECT 3, 2, DATEADD(dd, -6, getdate())

        DECLARE @SomeTableDetail TABLE(DetailIdPk int, SomeIdPk int, Viewed INT, ViewedTimeDetail datetime)
        INSERT INTO @SomeTableDetail SELECT 1, 1, 1, DATEADD(dd, -7, getdate())
        INSERT INTO @SomeTableDetail SELECT 2, 2, NULL, DATEADD(dd, -6, getdate())
        INSERT INTO @SomeTableDetail SELECT 3, 2, 2, DATEADD(dd, -8, getdate())
        INSERT INTO @SomeTableDetail SELECT 4, 3, 1, DATEADD(dd, -6, getdate())

        SELECT m.MainTablePkId,
               (SELECT COUNT(Viewed) FROM @SomeTableDetail),
               (SELECT TOP 1 s2.ViewedTimeDetail
                FROM @SomeTableDetail s2
                INNER JOIN @SomeTable s1 ON s2.SomeIdPk = s1.SomeIdPk
                WHERE s1.MainTablePkId = m.MainTablePkId)
        FROM @MainTable m

    The script above is just a sample. I have a long list of columns in the SELECT and around 12+ columns in the subquery. In my FROM clause there are around 8 tables. Fetching 2000 records takes the full query 21 seconds, and if I remove the subqueries it takes just 4 seconds. I have tried to optimize the query using the Database Engine Tuning Advisor and added the advised indexes and statistics, but these changes made the query time even worse. Note: as mentioned, this is test data to explain my question; the real query has many more tables, joins and columns, but without the subqueries it performs fine. Any help, thanks.
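
    Correlated TOP 1 subqueries like this run once per outer row; on SQL Server 2005+ one common rewrite is OUTER APPLY for the per-row lookup and a single CROSS JOIN for the uncorrelated count, which lets the optimizer treat both as joins. A sketch against the sample tables (the real query would apply the same shape):

        SELECT m.MainTablePkId,
               totals.ViewedCount,
               latest.ViewedTimeDetail
        FROM @MainTable m
        CROSS JOIN (SELECT COUNT(Viewed) AS ViewedCount
                    FROM @SomeTableDetail) AS totals
        OUTER APPLY (SELECT TOP 1 s2.ViewedTimeDetail
                     FROM @SomeTableDetail s2
                     INNER JOIN @SomeTable s1 ON s2.SomeIdPk = s1.SomeIdPk
                     WHERE s1.MainTablePkId = m.MainTablePkId
                     -- ORDER BY added so TOP 1 is deterministic; the original sample has none
                     ORDER BY s2.ViewedTimeDetail DESC) AS latest;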

    Read the article

  • How to declare a variable that spans multiple lines

    - by Chris Wilson
    I'm attempting to initialise a string variable in C++, and the value is so long that it's going to exceed the 80 character per line limit I'm working to, so I'd like to split it to the next line, but I'm not sure how to do that. I know that when splitting the contents of a stream across multiple lines, the syntax goes like cout << "This is a string" << "This is another string"; Is there an equivalent for variable assignment, or do I have to declare multiple variables and concatenate them? Edit: I misspoke when I wrote the initial question. When I say 'next line', I'm just meaning the next line of the script. When it is printed upon execution, I would like it to be on the same line.
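
    In C++ this needs no runtime concatenation: adjacent string literals are merged by the compiler into a single string, so the source can wrap at the 80-column limit while the value still prints on one line. A minimal sketch:

        #include <iostream>
        #include <string>

        int main()
        {
            // The two literals below are joined at compile time into one string.
            const std::string message =
                "This is the first half of a value that is too long for one source line "
                "and this is the second half of it.";

            std::cout << message << std::endl;   // prints on a single line
            return 0;
        }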

    Read the article
