Search Results

Search found 18542 results on 742 pages for 'nhibernate query'.

Page 686 of 742

  • django: results in in_bulk style without IDs

    - by valya
    In Django 1.1.1, Place.objects.in_bulk() does not work, but Place.objects.in_bulk(range(1, 100)) does, and it returns a dictionary keyed by primary key. How can I avoid using range here (and avoid a separate query just to collect the ids)? I simply want all objects in this dictionary format.

        >>> Place.objects.in_bulk()
        Traceback (most recent call last):
          File "<console>", line 1, in <module>
          File "/usr/lib/python2.5/site-packages/Django-1.1.1-py2.5.egg/django/db/models/manager.py", line 144, in in_bulk
            return self.get_query_set().in_bulk(*args, **kwargs)
        TypeError: in_bulk() takes exactly 2 arguments (1 given)
        >>> Place.objects.in_bulk(range(1, 100))
        {1L: <Place: "??? ????">, 3L: <Place: "???????????? ?????">, 4L: <Place: "????????? "??????"">, 5L: <Place: "????????? "??????"">, 8L: <Place: "????????? "??????????????"">, 9L: <Place: "??????? ????????">, 10L: <Place: "????????? ???????">, 11L: <Place: "??????????????? ???">, 14L: <Place: "????? ????? ??????">}
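
    One way around this (a sketch, assuming the goal is simply "every row keyed by its primary key"; the variable names are illustrative and untested against this project):

        # Build the same {pk: obj} mapping without hard-coding an id range
        # (dict() call rather than a dict comprehension, for Python 2.5).
        all_places = dict((p.pk, p) for p in Place.objects.all())

        # Or hand in_bulk() the real list of primary keys instead of range():
        all_places = Place.objects.in_bulk(list(Place.objects.values_list('pk', flat=True)))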

    Read the article

  • question about MySQL transaction and trigger

    - by WilliamLou
    I quickly browsed the MySQL manual but didn't find the exact information about my question. Here it is: suppose I have an InnoDB table A with two triggers, fired by 'AFTER INSERT ON A' and 'AFTER UPDATE ON A'. For example, one trigger is defined as:

        CREATE TRIGGER test_trigger AFTER INSERT ON A
        FOR EACH ROW
        BEGIN
          INSERT INTO B SELECT * FROM A WHERE A.col1 = NEW.col1
        END;

    You can ignore the query between BEGIN and END; the point is that this trigger inserts several rows into table B, which is also an InnoDB table. Now, say I start a transaction and then insert many rows, for example 10K, into table A. If there is no trigger associated with table A, all these inserts are atomic, that's for sure. But if table A has several insert/update triggers that insert or update many rows in table B and/or table C, will all those inserts and updates still be atomic? I think they are, but it's difficult to test and I can't find an explanation in the manual. Can anyone confirm this? Thanks a lot!
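
    One quick way to test the atomicity yourself (a sketch; the table and column names are taken from the example above):

        START TRANSACTION;
        INSERT INTO A (col1) VALUES (1), (2), (3);  -- fires the AFTER INSERT trigger, writing rows to B
        ROLLBACK;
        SELECT COUNT(*) FROM B;                     -- the trigger's rows should be gone as well

    For InnoDB tables, statements executed inside a trigger run within the same transaction as the statement that fired the trigger, so they commit or roll back together.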

    Read the article

  • entity framework insert bug

    - by tmfkmoney
    I found a previous question which seemed related, but there's no resolution and it's 5 months old, so I've opened my own version: http://stackoverflow.com/questions/1545583/entity-framework-inserting-new-entity-via-objectcontext-does-not-use-existing-e

    When I insert records into my database with the following, it works fine for a while and then eventually starts inserting null values in the referenced field. This typically happens after I do an update on my model from the database, although not always. I'm using a MySQL database for this. I have debugged the code and the values are being set properly before the save event; they're just not getting inserted properly. I can always fix the issue by re-creating the model without touching any of my code. I have to recreate the entire model, though; I can't just dump the relevant tables and re-add them. This makes me think it doesn't have anything to do with my code but something with the Entity Framework. Does anyone else have this problem and/or solved it?

        using (var db = new MyModel())
        {
            var stocks = from record in query
                         let ticker = record.Ticker
                         select new
                         {
                             company = db.Companies.FirstOrDefault(c => c.ticker == ticker),
                             price = Convert.ToDecimal(record.Price),
                             date_stamp = Convert.ToDateTime(record.DateTime)
                         };

            foreach (var stock in stocks)
            {
                if (stock.company != null)
                {
                    var price = new StockPrice
                    {
                        Company = stock.company,
                        price = stock.price,
                        date_stamp = stock.date_stamp
                    };
                    db.AddToStockPrices(price);
                }
            }
            db.SaveChanges();
        }

    Read the article

  • collation conflict SQL/SERVER 2008

    - by vikitor
    Hello, I've been going around in circles on this but haven't found a solution to my problem. My SQL query is:

        SELECT dbo.Country.CtyRecID, dbo.Country.CtyShort, dbo.Notification.NotRecID, dbo.Notification.NotName,
               dbo.TemporalSuspension.TCtsCode, dbo.TemporalSuspension.TCtsCodeRecID,
               dbo.TaxPhylum.PhyName AS Taxon,
               dbo.TemporalSuspension.TCtsNotes, dbo.TemporalSuspension.TCtsRecID, dbo.TemporalSuspension.TCtsKgmRecID,
               CASE dbo.TemporalSuspension.TCtsKgmRecID WHEN 1 THEN 'Animals' WHEN 2 THEN 'Plants' ELSE 'All' END AS Kingdom
        FROM dbo.TemporalSuspension
        INNER JOIN dbo.Notification ON dbo.TemporalSuspension.TCtsStartNotRecID = dbo.Notification.NotRecID
        INNER JOIN dbo.Country ON dbo.TemporalSuspension.TCtsCtyRecID = dbo.Country.CtyRecID
        INNER JOIN dbo.TaxPhylum ON dbo.TemporalSuspension.TCtsCodeRecID = dbo.TaxPhylum.PhyRecID
            AND dbo.TemporalSuspension.TCtsCode LIKE 'PHY'
        UNION ALL
        SELECT dbo.Country.CtyRecID, dbo.Country.CtyShort, dbo.Notification.NotRecID, dbo.Notification.NotName,
               dbo.TemporalSuspension.TCtsCode, dbo.TemporalSuspension.TCtsCodeRecID,
               dbo.TaxClass.ClaName AS Taxon,
               dbo.TemporalSuspension.TCtsNotes, dbo.TemporalSuspension.TCtsRecID, dbo.TemporalSuspension.TCtsKgmRecID,
               CASE dbo.TemporalSuspension.TCtsKgmRecID WHEN 1 THEN 'Animals' WHEN 2 THEN 'Plants' ELSE 'All' END AS Kingdom
        FROM dbo.TemporalSuspension
        INNER JOIN dbo.Notification ON dbo.TemporalSuspension.TCtsStartNotRecID = dbo.Notification.NotRecID
        INNER JOIN dbo.Country ON dbo.TemporalSuspension.TCtsCtyRecID = dbo.Country.CtyRecID
        INNER JOIN dbo.TaxClass ON dbo.TemporalSuspension.TCtsCodeRecID = dbo.TaxClass.ClaRecID
            AND dbo.TemporalSuspension.TCtsCode LIKE 'CLA'

    But I don't understand why it doesn't work. I keep getting this error: "Cannot resolve collation conflict for column 7 in SELECT statement." What's wrong? I've used this pattern other times and never had this problem. Thanks.
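
    A likely fix, offered as a sketch rather than something verified against this schema: column 7 of the UNION is the Taxon column, and PhyName and ClaName presumably use different collations, so forcing both branches to a common collation usually resolves this error:

        ..., dbo.TaxPhylum.PhyName COLLATE DATABASE_DEFAULT AS Taxon, ...
        UNION ALL
        ..., dbo.TaxClass.ClaName COLLATE DATABASE_DEFAULT AS Taxon, ...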

    Read the article

  • DB Design to store custom fields for a table

    - by Fazal
    Hi all, this question came up based on the responses I got for http://stackoverflow.com/questions/2785033/getting-wierd-issue-with-to-number-function-in-oracle. Everyone there suggested that storing numeric values in VARCHAR2 columns is not good practice (which I totally agree with), so I am wondering about a basic design choice our team has made and whether there is a better way.

    Problem statement: we have many tables where we want to offer a certain number of custom fields. The number of custom fields is known up front, but which kind of attribute is mapped to each column is up to the user. A hypothetical scenario: say a laptop record stores 50 attribute values, and the attributes are defined by whichever admin creates the laptop. One user creates product lap1 with attributes String, String, numeric, numeric, String; a second user creates lap2 with attributes String, numeric, String, String, numeric. Currently the data is persisted like this:

        Laptop table
        Id  Name  field1  field2  field3  field4  field5
        1   lap1  lappy   lappy   12      13      lappy
        2   lap2  lappy2  13      lappy2  lapp2   12

    This causes us to use TO_NUMBER when querying. If somebody looks up lap2 records comparing on field2, we need to apply TO_NUMBER:

        select * from laptop where name='lap2' and TO_NUMBER(field2) < 15

    TO_NUMBER fails in some cases when the query plan decides to apply to_number before the other filter. Questions: Is this a valid design? What are the alternative ways to solve this problem? One of our team mates suggested creating tables on the fly for such cases; is that a good idea? How do popular ORM tools handle custom fields or flex fields? I hope I was able to make sense in the question; sorry for such a long text.
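
    As a side note on the immediate TO_NUMBER failure, one commonly suggested guard is to only convert values that actually look numeric (a sketch; it assumes the numeric values contain digits only):

        select *
        from laptop
        where name = 'lap2'
          and to_number(case when regexp_like(field2, '^[0-9]+$') then field2 end) < 15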

    Read the article

  • Get part of array string

    - by user1560295
    Hello, my PHP output is: Array ( [country] => BG - Bulgaria ), and it comes from here:

        <?php
        $ip = $_SERVER['REMOTE_ADDR'];
        print_r(geoCheckIP($ip));
        //Array ( [domain] => dslb-094-219-040-096.pools.arcor-ip.net [country] => DE - Germany [state] => Hessen [town] => Erzhausen )

        //Get an array with geoip-infodata
        function geoCheckIP($ip)
        {
            //check, if the provided ip is valid
            if (!filter_var($ip, FILTER_VALIDATE_IP)) {
                throw new InvalidArgumentException("IP is not valid");
            }

            //contact ip-server
            $response = @file_get_contents('http://www.netip.de/search?query='.$ip);
            if (empty($response)) {
                throw new InvalidArgumentException("Error contacting Geo-IP-Server");
            }

            //Array containing all regex-patterns necessary to extract ip-geoinfo from page
            $patterns = array();
            $patterns["country"] = '#Country: (.*?)&nbsp;#i';

            //Array where results will be stored
            $ipInfo = array();

            //check response from ipserver for above patterns
            foreach ($patterns as $key => $pattern) {
                //store the result in array
                $ipInfo[$key] = preg_match($pattern, $response, $value) && !empty($value[1]) ? $value[1] : '';
            }

            return $ipInfo;
        }
        ?>

    How can I get only the name of the country, in my case "Bulgaria"? I think it needs preg_replace or substr, but I don't know which is the better solution.
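
    One simple way to get just the name (a sketch; it assumes the country value always has the "XX - Name" shape shown above):

        // "BG - Bulgaria" -> "Bulgaria"
        $parts = explode(' - ', $ipInfo['country'], 2);
        $countryName = isset($parts[1]) ? trim($parts[1]) : $ipInfo['country'];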

    Read the article

  • authorise user from mysql database

    - by Jacksta
    I suck at PHP and can't find the error here. The script gets two variables, "username" and "password", from an HTML form and then checks them against a MySQL database. When I run this I get the error "Query was empty".

        <?
        if ((!$_POST[username]) || (!$_POST[password])) {
            header("Location: show_login.html");
            exit;
        }

        $db_name = "testDB";
        $table_name = "auth_users";

        $connection = @mysql_connect("localhost", "admin", "pass") or die(mysql_error());
        $db = @mysql_select_db($db_name, $connection) or die(mysql_error());

        $slq = "SELECT * FROM $table_name WHERE username ='$_POST[username]' AND password = password('$_POST[password]')";
        $result = @mysql_query($sql, $connection) or die(mysql_error());

        $num = mysql_num_rows($result);
        if ($num != 0) {
            $msg = "<p>Congratulations, you're authorised!</p>";
        } else {
            header("Location: show_login.html");
            exit;
        }
        ?>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
        <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
        <title>Secret Area</title>
        </head>
        <body>
        <? echo "$msg"; ?>
        </body>
        </html>
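
    The "Query was empty" message usually means an empty value reached mysql_query(); here the SQL string is assigned to $slq but $sql is what gets executed, so that typo is the likely culprit. A minimal sketch of the corrected lines (the escaping calls are an added precaution, not part of the original):

        $sql = "SELECT * FROM $table_name
                WHERE username = '" . mysql_real_escape_string($_POST['username']) . "'
                  AND password = password('" . mysql_real_escape_string($_POST['password']) . "')";
        $result = mysql_query($sql, $connection) or die(mysql_error());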

    Read the article

  • SQL connection to database repeating

    - by user175084
    I am using a SQL database to get values from different tables. I open the connection and fetch the values like this:

        DataTable dt = new DataTable();
        SqlConnection connection = new SqlConnection();
        connection.ConnectionString = ConfigurationManager.ConnectionStrings["XYZConnectionString"].ConnectionString;
        connection.Open();
        SqlCommand sqlCmd = new SqlCommand("SELECT * FROM Machines", connection);
        SqlDataAdapter sqlDa = new SqlDataAdapter(sqlCmd);
        sqlCmd.Parameters.AddWithValue("@node", node);
        sqlDa.Fill(dt);
        connection.Close();

    This is just one query on the page, and I call many other queries on the same page. Do I need to open and close the connection every time? Also, this portion is common to all of them:

        DataTable dt = new DataTable();
        SqlConnection connection = new SqlConnection();
        connection.ConnectionString = ConfigurationManager.ConnectionStrings["XYZConnectionString"].ConnectionString;
        connection.Open();

    Can I put it in one function and call that instead? The code would look cleaner. I tried doing that but I get errors like "Connection does not exist in the current context". Any suggestions? Thanks.
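
    With ADO.NET connection pooling, opening and closing a connection per query is normally cheap, so the usual refactoring is simply to wrap the shared part in a helper and let using dispose of everything (a sketch; the WHERE clause and names are illustrative, since the original query adds @node without referencing it):

        private static SqlConnection OpenConnection()
        {
            var connection = new SqlConnection(
                ConfigurationManager.ConnectionStrings["XYZConnectionString"].ConnectionString);
            connection.Open();
            return connection;
        }

        // usage
        var dt = new DataTable();
        using (var connection = OpenConnection())
        using (var cmd = new SqlCommand("SELECT * FROM Machines WHERE node = @node", connection))
        {
            cmd.Parameters.AddWithValue("@node", node);
            new SqlDataAdapter(cmd).Fill(dt);
        }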

    Read the article

  • PHP/MySQL time zone migration

    - by El Yobo
    I have an application that currently stores timestamps in MySQL DATETIME and TIMESTAMP values. However, the application needs to be able to accept data from users in multiple time zones and show the timestamps in the time zone of other users. As such, this is how I plan to amend the application; I would appreciate any suggestions to improve the approach.

    Database modifications:
      - All TIMESTAMPs will be converted to DATETIME values; this is to ensure consistency in approach and to avoid having MySQL try to do clever things and convert time zones (I want to keep the conversion in PHP, as it involves less modification to the application, and will be more portable when I eventually manage to escape from MySQL).
      - All DATETIME values will be adjusted to convert them to UTC time (currently all in Australian EST).

    Query modifications:
      - All usage of NOW() to be replaced with UTC_TIMESTAMP() in queries, triggers, functions, etc.

    Application modifications:
      - The application must store the time zone and preferred date format (e.g. US vs the rest of the world).
      - All timestamps will be converted according to the user settings before being displayed.
      - All input timestamps will be converted to UTC according to the user settings before being input.

    Additional notes: converting formats will be done at the application level for several main reasons.
      - The approach to converting time zones varies from DB to DB, so handling it there will be non-portable (and I really hope to be migrating away from MySQL some time in the not-too-distant future).
      - MySQL TIMESTAMPs have limited ranges for the permitted dates (~1970 to ~2038).
      - MySQL TIMESTAMPs have other undesirable attributes, including bizarre auto-update behaviour (if not carefully disabled) and sensitivity to the server zone settings (and I suspect I might screw these up when I migrate to Amazon later in the year).

    Is there anything that I'm missing here, or does anyone have better suggestions for the approach?
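
    A minimal sketch of the PHP side of the conversion (it assumes user time zones are stored as IANA names such as 'Australia/Sydney' and that DATETIME values are saved as 'Y-m-d H:i:s' strings; variable names are illustrative):

        $utc = new DateTimeZone('UTC');
        $userTz = new DateTimeZone($userTimezone);

        // Display: UTC string from the DB -> user's local time
        $dt = new DateTime($rowUtcDatetime, $utc);
        $dt->setTimezone($userTz);
        echo $dt->format($userDateFormat);

        // Input: user's local time -> UTC string for the DB
        $dt = new DateTime($userInput, $userTz);
        $dt->setTimezone($utc);
        $forDb = $dt->format('Y-m-d H:i:s');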

    Read the article

  • Product name prints several times -- how do I fix it?

    - by mans
    I added the following OpenCart module for my order report list: http://www.opencart.com/index.php?route=extension/extension/info&extension_id=3597&filter_search=order%20list%20filter%20model&page=4

    I have a problem with the "Products" column. If a product has more than one option, the product name prints several times; so a product with three options prints its name three times. Is there any way to fix this? I want to print the product name and model number only once. I will attach the results I get now. This is my SQL query:

        public function getOrders($data = array()) {
            $sql = "select o.order_id, o.email, o.telephone,
                CONCAT(o.shipping_address_1, ' ', o.shipping_address_2) AS address,
                CONCAT(o.firstname, ' ', o.lastname) AS customer,
                o.payment_zone AS state, o.payment_address_2 AS block, o.payment_address_1 AS address,
                o.payment_postcode AS postcode,
                (SELECT os.name FROM " . DB_PREFIX . "order_status os
                  WHERE os.order_status_id = o.order_status_id
                    AND os.language_id = '" . (int)$this->config->get('config_language_id') . "') AS status,
                o.payment_city AS city,
                GROUP_CONCAT(pd.name) AS pdtname,
                GROUP_CONCAT(op.model) AS model,
                o.date_added,
                sum(op.quantity) AS quantity,
                GROUP_CONCAT(opt.value) AS options,
                GROUP_CONCAT(opt.order_product_id) AS ordprdid,
                GROUP_CONCAT(op.order_product_id) AS optprdid,
                GROUP_CONCAT(op.quantity) AS opquantity
                from `" . DB_PREFIX . "order` o
                LEFT JOIN " . DB_PREFIX . "order_product op ON (op.order_id = o.order_id)
                LEFT JOIN " . DB_PREFIX . "product_description pd ON (pd.product_id = op.product_id
                    and pd.language_id = '" . (int)$this->config->get('config_language_id') . "')
                LEFT JOIN " . DB_PREFIX . "order_option opt ON (opt.order_product_id = op.order_product_id) ";

    The product name is: GROUP_CONCAT(pd.name) AS pdtname
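
    One adjustment that often fixes this kind of duplication (a sketch, not tested against this schema): the join to order_option multiplies the rows per option, so de-duplicating the concatenated name and model keeps them to one value each:

        GROUP_CONCAT(DISTINCT pd.name SEPARATOR ', ') AS pdtname,
        GROUP_CONCAT(DISTINCT op.model SEPARATOR ', ') AS model,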

    Read the article

  • How to efficiently store and update binary data in Mongodb?

    - by Rocketman
    I am storing a large binary array within a document. I wish to continually add bytes to this array and sometimes change the value of existing bytes. I was looking for some $append_bytes and $replace_bytes type of modifiers, but it appears that the best I can do is $push for arrays. It seems like this would be doable by performing seek-write type operations if I somehow had access to the underlying BSON on disk, but it does not appear to me that there is any way to do this in MongoDB (and probably for good reason).

    If I were instead to just query this binary array, edit or add to it, and then update the document by rewriting the entire field, how costly would that be? Each binary array will be on the order of 1-2 MB, and updates occur once every 5 minutes across 1000s of documents. Worse yet, there is no easy way to spread these out in time, and they will usually be happening close to one another on the 5 minute intervals. Does anyone have a good feel for how disastrous this would be? It seems like it would be problematic.

    An alternative would be to store this binary data as separate files on disk, implement a thread pool to efficiently manipulate the files on disk, and reference the filename from my MongoDB document. (I'm using Python and pymongo, so I was looking at pytables.) I'd prefer to avoid this if possible. Is there any other alternative that I am overlooking here? Thanks in advance.

    Read the article

  • Program freezing when syncing a ldap database (100+ entries added)

    - by djerry
    Hey guys, I'm updating an LDAP database. I need to add a list of users to the DB, and I've written a simple foreach loop. There are about 180 users to add, but at the 128th user the program freezes. I know LDAP is really built for (fast) querying, and that adding and modifying entries will not go as smoothly as a search query, but is it normal that the program freezes while doing this? I'll post some code just in case.

        public static void SyncLDAPWithMySql(Novell.Directory.Ldap.LdapConnection _conn)
        {
            List<User> users = GetUsers();
            int iteller = 0;
            foreach (User user in users)
            {
                if (!UserAlreadyInLdap(user, _conn))
                {
                    TelUser teluser = new TelUser();
                    teluser.Telephone = user.E164;
                    teluser.Uid = user.E164;
                    teluser.Company = "/";
                    teluser.Dn = "";
                    teluser.Name = "/";
                    teluser.DisplayName = "/";
                    teluser.FirstName = "/";
                    TelephoneDA.InsertUser(_conn, teluser);
                }
                Console.WriteLine(iteller + " : " + user.E164);
                iteller++;
            }
        }

        private static bool UserAlreadyInLdap(User user, Novell.Directory.Ldap.LdapConnection _conn)
        {
            List<TelUser> users = TelephoneDA.GetAllEntries(_conn);
            foreach (TelUser teluser in users)
            {
                if (teluser.Telephone.Equals(user.E164))
                    return true;
            }
            return false;
        }

        public static int InsertUser(LdapConnection conn, TelUser user)
        {
            int iResponse = IsTelNumberUnique(conn, user.Dn, user.Telephone);
            if (iResponse == 0)
            {
                LdapAttributeSet attrSet = MakeAttSet(user);
                string dnForPhonebook = configurationManager.AppSettings.Get("phonebookDn");
                LdapEntry ent = new LdapEntry("uid=" + user.Uid + "," + dnforPhonebook, attrSet);
                try
                {
                    conn.Add(ent);
                }
                catch (Exception ex)
                {
                    Console.WriteLine(ex.Message);
                }
            }
            return iResponse;
        }

    Am I adding too many entries at a time? Thanks in advance.
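
    Whatever the cause of the freeze, note that UserAlreadyInLdap re-reads every entry from the directory for each of the 180 users, which gets expensive quickly. A sketch of one cheap improvement (it assumes TelephoneDA.GetAllEntries is the costly call; requires using System.Linq):

        // Fetch the existing numbers once, before the loop.
        var existing = new HashSet<string>(
            TelephoneDA.GetAllEntries(_conn).Select(u => u.Telephone));

        foreach (User user in GetUsers())
        {
            if (existing.Contains(user.E164))
                continue;
            // ... build teluser and call TelephoneDA.InsertUser(_conn, teluser) as before ...
        }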

    Read the article

  • Beginner SQL section: avoiding repeated expression

    - by polygenelubricants
    I'm entirely new to SQL, but let's say that on the StackExchange Data Explorer I just want to list the top 15 users by reputation, and I wrote something like this:

        SELECT TOP 15 DisplayName, Id, Reputation, Reputation/1000 As RepInK
        FROM Users
        WHERE RepInK > 10
        ORDER BY Reputation DESC

    Currently this gives "Error: Invalid column name 'RepInK'", which makes sense, I think, because RepInK is not a column in Users. I can easily fix this by saying WHERE Reputation/1000 > 10, essentially repeating the formula. So the questions are:
      - Can I actually use the RepInK "column" in the WHERE clause? Do I perhaps need to create a virtual table/view with this column, and then do a SELECT/WHERE query on it?
      - Can I name an expression, e.g. Reputation/1000, so I only have to repeat the name in a few places instead of the formula? What do you call this? A substitution macro? A function? A stored procedure?
      - Is there a SQL cheat sheet, glossary of terms, language specification, anything I can use to quickly pick up the syntax and semantics of the language? I understand that there are different "flavors"?
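
    For the first two questions, a derived table (or a CTE) is the standard way to name the expression once and reuse it; a sketch in the T-SQL dialect the Data Explorer runs:

        SELECT TOP 15 DisplayName, Id, Reputation, RepInK
        FROM (
            SELECT DisplayName, Id, Reputation, Reputation / 1000 AS RepInK
            FROM Users
        ) AS u
        WHERE RepInK > 10
        ORDER BY Reputation DESC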

    Read the article

  • Sql Compact and __sysobjects

    - by Scott Wisniewski
    I have some SQL Compact queries that create tables inside of a transaction. This is mainly because I need to simulate temporary tables, which SQL Compact does not support. I do this by creating a real table and then dropping it at the end of the transaction. This mostly works.

    Sometimes, however, when creating the tables SQL Compact will try to acquire PAGE level locks on the __sysobjects table. If there are several concurrent queries running that create "temp" tables, the attempt to acquire a page lock can result in a deadlock followed by a SqlLockTimeout exception. For normal tables I could fix this using a "with (rowlock)" hint. However, because I'm not the one writing the query that inserts into __sysobjects (SQL Server does that in response to "create table"), I can't do this.

    Does anyone know of a way I could get around this? I've thought about pulling the table creation out of the transaction, but that opens up the possibility of phantom temporary tables that I'd then need to clean up regularly. Ideally I'd like to avoid that if possible.

    Read the article

  • Can I delay the keyup event for jquery?

    - by Paul
    I'm using the Rotten Tomatoes movie API in conjunction with Twitter's typeahead plugin using Bootstrap 2.0. I've been able to integrate the API, but the issue I'm having is that the API gets called after every keyup event. This is all fine and dandy, but I would rather make the call after a small pause, allowing the user to type in several characters first. Here is my current code that calls the API on a keyup event:

        var autocomplete = $('#searchinput').typeahead()
            .on('keyup', function (ev) {
                ev.stopPropagation();
                ev.preventDefault();

                //filter out up/down, tab, enter, and escape keys
                if ($.inArray(ev.keyCode, [40, 38, 9, 13, 27]) === -1) {
                    var self = $(this);

                    //set typeahead source to empty
                    self.data('typeahead').source = [];

                    //active used so we aren't triggering duplicate keyup events
                    if (!self.data('active') && self.val().length > 0) {
                        self.data('active', true);

                        //Do data request. Insert your own API logic here.
                        $.getJSON("http://api.rottentomatoes.com/api/public/v1.0/movies.json?callback=?&apikey=MY_API_KEY&page_limit=5", {
                            q: encodeURI($(this).val())
                        }, function (data) {
                            //set this to true when your callback executes
                            self.data('active', true);

                            //Filter out your own parameters. Populate them into an array, since this is what typeahead's source requires
                            var arr = [], i = 0;
                            var movies = data.movies;
                            $.each(movies, function (index, movie) {
                                arr[i] = movie.title;
                                i++;
                            });

                            //set your results into the typeahead's source
                            self.data('typeahead').source = arr;

                            //trigger keyup on the typeahead to make it search
                            self.trigger('keyup');

                            //All done, set to false to prepare for the next remote query.
                            self.data('active', false);
                        });
                    }
                }
            });

    Is it possible to set a small delay and avoid calling the API after every keyup?
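
    A common approach is to debounce the handler with setTimeout/clearTimeout so the request only fires once the user pauses (a sketch; the 300 ms delay and variable names are arbitrary):

        var searchTimer;
        $('#searchinput').on('keyup', function (ev) {
            clearTimeout(searchTimer);
            var self = $(this);
            searchTimer = setTimeout(function () {
                // existing key filtering and $.getJSON call go here
            }, 300); // wait 300 ms after the last keystroke before calling the API
        });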

    Read the article

  • SQL putting two single quotes around datetime fields and fails to insert record

    - by user82613
    I am trying to INSERT into a SQL database table, but it doesn't work, so I used SQL Server Profiler to see how the query was being built. It shows the following:

        declare @p1 int
        set @p1=0
        declare @p2 int
        set @p2=0
        declare @p3 int
        set @p3=1
        exec InsertProcedureName @ConsumerMovingDetailID=@p1 output, @UniqueID=@p2 output, @ServiceID=@p3 output,
            @ProjectID=N'0', @IPAddress=N'66.229.112.168', @FirstName=N'Mike', @LastName=N'P',
            @Email=N'[email protected]', @PhoneNumber=N'(254)637-1256', @MobilePhone=NULL,
            @CurrentAddress=N'', @FromZip=N'10005', @MoveInAddress=N'', @ToZip=N'33067',
            @MovingSize=N'1',
            @MovingDate=''2009-04-30 00:00:00:000'',   /* Problem here ^^^ */
            @IsMovingVehicle=0, @IsPackingRequired=0, @IncludeInSaveologyPlanner=1
        select @p1, @p2, @p3

    As you can see, it wraps the datetime field in two pairs of single quotes, which produces a syntax error in SQL. I wonder if there is anything I must configure somewhere? Any help would be appreciated. Environment details: Visual Studio 2008, .NET 3.5, MS SQL Server 2005. Here is the .NET code I'm using:

        //call procedure for results
        strStoredProcedureName = "usp_SMMoverSearchResult_SELECT";
        Database database = DatabaseFactory.CreateDatabase();
        DbCommand dbCommand = database.GetStoredProcCommand(strStoredProcedureName);
        dbCommand.CommandTimeout = DataHelper.CONNECTION_TIMEOUT;
        database.AddInParameter(dbCommand, "@MovingDetailID", DbType.String, objPropConsumer.ConsumerMovingDetailID);
        database.AddInParameter(dbCommand, "@FromZip", DbType.String, objPropConsumer.FromZipCode);
        database.AddInParameter(dbCommand, "@ToZip", DbType.String, objPropConsumer.ToZipCode);
        database.AddInParameter(dbCommand, "@MovingDate", DbType.DateTime, objPropConsumer.MoveDate);
        database.AddInParameter(dbCommand, "@PLServiceID", DbType.Int32, objPropConsumer.ServiceID);
        database.AddInParameter(dbCommand, "@FromAreaCode", DbType.String, pFromAreaCode);
        database.AddInParameter(dbCommand, "@FromState", DbType.String, pFromState);
        database.AddInParameter(dbCommand, "@ToAreaCode", DbType.String, pToAreaCode);
        database.AddInParameter(dbCommand, "@ToState", DbType.String, pToState);
        DataSet dstSearchResult = new DataSet("MoverSearchResult");
        database.LoadDataSet(dbCommand, dstSearchResult, new string[] { "MoverSearchResult" });

    Read the article

  • Export large amount of data from Oracle 10G to SQL Server 2005

    - by uniball
    Dear all, I need to export 100 million data rows (avg row length ~100 bytes) from an Oracle 10G database table into SQL Server 2005 (over WAN/VLAN with 6 Mbit/s capacity) on a regular basis. So far, these are the options that I have tried, with a quick summary. Has anyone tried this before? Are there other, better options? Which option would be the best in terms of performance and reliability? The time taken has been calculated using tests on smaller amounts of data and then extrapolating it to estimate the time required.

      - Using the data import wizard on the SQL Server, or SSIS packages, to import the data. It will take around 150 hours to complete the task.
      - Using an Oracle batch job to spool data into a comma-delimited flat file, then using an SSIS package to FTP this file to the SQL Server and load directly from the flat file. The issue here is the size of the flat file, which is expected to run into GBs.
      - Although this option is drastically different, I am even considering using a Linked Server to query the Oracle data directly at run time, to avoid bringing in the data at all. Performance is a big problem, and I have limited control over the Oracle database in terms of creating table indexes.

    Regards, Uniball

    Read the article

  • MySQL developer here -- Nesting with select * finicky in Oracle 10g?

    - by John Sullivan
    I'm writing a simple diagnostic query and then attempting to execute it in the Oracle 10g SQL Scratchpad. (EDIT: it will not be used in code.) I'm nesting a simple "select *" and it's giving me errors. In the SQL Scratchpad for the Oracle 10g Enterprise Manager Console, this statement runs fine:

        SELECT *
        FROM v$session sess, v$sql sql
        WHERE sql.sql_id(+) = sess.sql_id
          and sql.sql_text <> ' '

    If I try to wrap that up in select * from (...) tb2, I get the error "ORA-00918: Column Ambiguously Defined". I didn't think that could ever happen with this kind of statement, so I am a bit confused.

        select * from (
          SELECT *
          FROM v$session sess, v$sql sql
          WHERE sql.sql_id(+) = sess.sql_id
            and sql.sql_text <> ' '
        ) tb2

    You should always be able to select * from the result set of another select * statement using this structure, as far as I'm aware... right? Is Oracle/10g/the Scratchpad trying to force me to accept a certain syntactic structure to prevent excessive nesting? Is this a bug in the Scratchpad, or something about how Oracle works?
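
    A hedged explanation, not verified against this instance: v$session and v$sql share several column names (SQL_ID, MODULE and ACTION among them), and Oracle only rejects the duplicates once the inline view has to expose them as its own column list. Selecting explicit, aliased columns in the inner query avoids it, e.g.:

        select *
        from (
          SELECT sess.sid, sess.username, sql.sql_id AS sql_sql_id, sql.sql_text
          FROM v$session sess, v$sql sql
          WHERE sql.sql_id(+) = sess.sql_id
            and sql.sql_text <> ' '
        ) tb2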

    Read the article

  • PostgreSQL: return select count(*) from old_ids;

    - by Alexander Farber
    Hello, please help me with one more PL/pgSQL question. I have a PHP script run as a daily cronjob that deletes old records from one main table and a few further tables referencing its "id" column:

        create or replace function quincytrack_clean()
        returns integer as $BODY$
        begin
            create temp table old_ids (id varchar(20)) on commit drop;

            insert into old_ids
                select id from quincytrack
                where age(QDATETIME) > interval '30 days';

            delete from hide_id where id in (select id from old_ids);
            delete from related_mks where id in (select id from old_ids);
            delete from related_cl where id in (select id from old_ids);
            delete from related_comment where id in (select id from old_ids);
            delete from quincytrack where id in (select id from old_ids);

            return select count(*) from old_ids;
        end;
        $BODY$ language plpgsql;

    And here is how I call it from the PHP script:

        $sth = $pg->prepare('select quincytrack_clean()');
        $sth->execute();
        if ($row = $sth->fetch(PDO::FETCH_ASSOC))
            printf("removed %u old rows\n", $row['count']);

    Why do I get the following error?

        SQLSTATE[42601]: Syntax error: 7 ERROR: syntax error at or near "select" at character 9
        QUERY: SELECT select count(*) from old_ids
        CONTEXT: SQL statement in PL/PgSQL function "quincytrack_clean" near line 23

    Thank you! Alex
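
    The usual fix (a sketch) is to make the count an expression rather than a bare SELECT, either inline or via a variable:

        return (select count(*) from old_ids);

        -- or, with a declared variable:
        -- declare cnt integer; ... select count(*) into cnt from old_ids; return cnt;

    Note also that select quincytrack_clean() produces a column named quincytrack_clean, so the PHP side's $row['count'] would need the call aliased (select quincytrack_clean() as count) or the array key changed.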

    Read the article

  • The XPath @root-node-position attribute info

    - by Igor Savinkin
    I couldn't find any info on the @root-node-position XPath attribute. Could you give me a link to where I can read about it? Is it XPath 2.0? The code (not mine) is:

        ../preceding-sibling::div[1]/div[@root-node-position]/div

    applied to this HTML:

        <div class="left">
            <div class='prod2'>
                <div class='name'>Dell Latitude D610-1.73 Laptop Wireless Computer </div>2 GHz Intel Pentium M, 1 GB DDR2 SDRAM, 40 GB
            </div>
            <div class='prod1'>
                <div class='name'>Samsung Chromebook (Wi-Fi, 11.6-Inch) </div>1.7 GHz, 2 GB DDR3 SDRAM, 16 GB
            </div>
        </div>
        <div class="right">
            <div class='price2'>$239.95</div>
            <div class='price1 best'>$249.00</div>
        </div>

    First I fetch a price text under class='right' with this query:

        //DIV[contains(@class,'best')]

    and then I apply the above-mentioned XPath with the @root-node-position attribute under class='left' to retrieve the rest of the record info.

    Read the article

  • How do I avoid having JSONP returns cached in an HTML5 offline application?

    - by Kent Brewster
    I had good luck with cached offline apps until I tried including data from JSONP endpoints. Here's a tiny example, which loads a single movie from the new Netflix widget API:

        <!DOCTYPE html>
        <html manifest="main.manifest">
        <head>
            <title>Testing!</title>
        </head>
        <body>
            <p>Attempting to recover a title from Netflix now...</p>
            <script type="text/javascript">
                function ping(r) {
                    alert('API reply: ' + r.catalog_title.title.regular);
                }
                var cb = new Date().getTime();
                var s = document.createElement('SCRIPT');
                s.src = 'http://movi.es/7Soq?v=2.0&output=json&expand=widget&callback=ping&cacheBuster=' + cb;
                alert('SCRIPT src: ' + s.src);
                s.type = 'text/javascript';
                document.getElementsByTagName('BODY')[0].appendChild(s);
            </script>
        </body>
        </html>

    ... and here's the contents of my manifest, main.manifest, which contains no files and is only there so my browser knows to cache the calling HTML file:

        CACHE MANIFEST

    Yes, I've confirmed that my server is sending the manifest down with the correct content type, text/cache-manifest. The app works fine -- meaning both alerts show -- the first time I run it, but subsequent runs, even with the attempted cacheBuster query parameter, seem to load the script from cache no matter what the query string is. I see the alert showing the script source, but the callback never fires. If I remove the manifest attribute from the html tag and reset my browser -- being Safari and the iPhone Simulator -- to clear cache, it works every time. I've also tried alerting the number of SCRIPT tags in the page, and it's definitely seeing both the existing and dynamically-created tag in all cases.
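
    One detail that often bites here (hedged, based on how the application cache's network whitelist behaves in general rather than on this specific page): once a manifest is in effect, URLs that are neither cached nor whitelisted are blocked outright, so a cross-origin JSONP script usually needs a NETWORK section such as:

        CACHE MANIFEST
        NETWORK:
        *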

    Read the article

  • How do I keep users from spoofing data through a form?

    - by Jonathan
    I have a site which has been running for some time now that uses a great deal of user input to build the site. Naturally there are dozens of forms on the site. When building the site, I often used hidden form fields to pass data back to the server so that I know which record to update. An example might be:

        <input type="hidden" name="id" value="132" />
        <input type="text" name="total_price" value="15.02" />

    When the form is submitted, these values get passed to the server and I update the records based on the data passed (i.e. the price of record 132 gets changed to 15.02). I recently found out that you can change the attributes and values via something as simple as Firebug. So... I open Firebug, change the id value to "155" and the price value to "0.00", and then submit the form. Voila! I view product number 155 on the site and it now says it's $0.00.

    This concerns me. How can I know which record to update without either a query string (easily modified) or a hidden input element passing the id to the server? And if there's no better way (I've seen literally thousands of websites that pass data this way), then how would I make it so that if a user changes these values, the data on the server side is not executed (or something similar to solve the issue)?

    I've thought about encrypting the id and then decrypting it on the other side, but that still doesn't protect me from someone changing it and just happening to get something that matches another id in the database. I've also thought about cookies, but I've heard those can be manipulated as well. Any ideas? This seems like a HUGE security risk to me.

    Read the article

  • JavaScript window object element properties

    - by Timothy
    A coworker showed me the following code and asked me why it worked:

        <span id="myspan">Do you like my hat?</span>
        <script type="text/javascript">
            var spanElement = document.getElementById("myspan");
            alert("Here I am! " + spanElement.innerHTML + "\n" + myspan.innerHTML);
        </script>

    I explained that when the browser parses the document, a property is attached to the window object with the name of the element's id, which then contains a reference to the appropriate DOM node. It's sort of as if window.myspan = document.getElementById("myspan") were called behind the scenes as the page is being rendered. The ensuing discussion we had raised a few questions:

      - The window object and most of the DOM are not part of the official JavaScript/ECMA standards, but is the above behavior documented in any other official literature, perhaps browser-related?
      - The above works in a browser (at least the main contenders) because there is a window object, but fails in something like Rhino. Is writing code that relies on this considered bad practice because it makes too many assumptions about the execution environment?
      - Are there any browsers in which the above would fail, or is this considered standard behavior across the board?

    Does anyone here know the answers to those questions and would be willing to enlighten me? I tried a quick internet search, but I admit I'm not sure how to even properly phrase the query. Pointers to references and documentation are welcome.

    Read the article

  • Why would paperclip not assign an ID to my uploaded photos?

    - by Trip
    I just deployed to a cluster server, and my delayed_jobs recipe was overwritten in the process. I solved that; delayed_jobs is up and running but can't find the ID of images that are uploaded. The images are saved correctly:

        Processing PhotosController#create (for 173.161.167.41 at 2010-06-01 05:09:14) [POST]
        Parameters: {"Filename"="1.jpg", "gallery_id"="1298", "action"="create", "amp"=nil, "authenticity_token"="qmbnpwFY8a5E3YtS/4fMWF/Z8evCE4hMxqKVJw0I7Ek=", "Upload"="Submit Query", "controller"="photos", "organization_id"="470", "_hq_channel_session"="BAh7CSIYdXNlcl9jcmVkZW50aWFsc19pZGkHIhV1c2VyX2NyZWRlbnRpYWxzIgGAOGRlZDc0NGJlOWU3NTNlNDFlYmVlMDdjMzIzYjA1ZjQxNGE5ZDY4YjNmYjFmNjNkMDQ2OWY2ZDQyOTljZDhiMDFlNmRkMDljNThmMzBmOWJhMTIwNDhkMDI5MTMxYmU5MDczYjIxZmI4YmQxMDVlMTBmNjZmOWFhODE1ZTBjMGM6EF9jc3JmX3Rva2VuIjFxbWJucHdGWThhNUUzWXRTLzRmTVdGL1o4ZXZDRTRoTXhxS1ZKdzBJN0VrPToPc2Vzc2lvbl9pZCIlMjAwMDQ3ZDQ3ZWUyZTgzODIxYzdjOGI3OTdmZGJiMDM=--ac6aa580262938bf5a4d6b9a740722b680eb5d48", "Filedata"=#}
        [paperclip] Saving attachments.
        [paperclip] saving /data/HQ_Channel/releases/20100530153454/public/system/photos/9253/original/1.jpg
        [paperclip] Saving attachments.
        [paperclip] Saving attachments.
        Completed in 127ms (View: 2, DB: 91) | 200 OK [http://invent.hqchannel.com/organizations/470/media/galleries/1298/photos?_hq_channel_session=BAh7CSIYdXNlcl9jcmVkZW50aWFsc19pZGkHIhV1c2VyX2NyZWRlbnRpYWxzIgGAOGRlZDc0NGJlOWU3NTNlNDFlYmVlMDdjMzIzYjA1ZjQxNGE5ZDY4YjNmYjFmNjNkMDQ2OWY2ZDQyOTljZDhiMDFlNmRkMDljNThmMzBmOWJhMTIwNDhkMDI5MTMxYmU5MDczYjIxZmI4YmQxMDVlMTBmNjZmOWFhODE1ZTBjMGM6EF9jc3JmX3Rva2VuIjFxbWJucHdGWThhNUUzWXRTLzRmTVdGL1o4ZXZDRTRoTXhxS1ZKdzBJN0VrPToPc2Vzc2lvbl9pZCIlMjAwMDQ3ZDQ3ZWUyZTgzODIxYzdjOGI3OTdmZGJiMDM%3D--ac6aa580262938bf5a4d6b9a740722b680eb5d48&authenticity_token=qmbnpwFY8a5E3YtS%2F4fMWF%2FZ8evCE4hMxqKVJw0I7Ek%3D]

    And then delayed_jobs keeps spinning around in circles on this one:

        2010-06-01T05:09:02-0700: * [Worker(delayed_job host:ip-10-251-197-159 pid:19994)] acquired lock on PhotoJob
        2010-06-01T05:09:02-0700: * [JOB] delayed_job host:ip-10-251-197-159 pid:19994 failed with ActiveRecord::RecordNotFound: Couldn't find Photo with ID=9247 - 0 failed attempts
        2010-06-01T05:09:02-0700: * [Worker(delayed_job host:ip-10-251-197-159 pid:19994)] acquired lock on PhotoJob
        2010-06-01T05:09:02-0700: * [JOB] delayed_job host:ip-10-251-197-159 pid:19994 failed with ActiveRecord::RecordNotFound: Couldn't find Photo with ID=9245 - 0 failed attempts
        2010-06-01T05:09:02-0700: * [Worker(delayed_job host:ip-10-251-197-159 pid:19994)] acquired lock on PhotoJob

    So what I get is that the photos are not being assigned IDs by paperclip. Anyone know where I could poke and pry from here?

    UPDATE: I created a clone application on a single server, and there are no problems. The images on the cluster do show up (occasionally). If I keep clicking on the folders that lead to photos, it will 50% of the time return a 404 with the photo not found, and the other half it will present the photo. So the problem has got to be with the server interaction between ActiveRecord and the multiple servers.

    Read the article

  • how to extract information from JSON using jQuery

    - by Richard Reddy
    Hi, I have a JSON response that is formatted by my C# WebMethod using the JavaScriptSerializer class. I currently get the following JSON back from my query:

        {"d":"[{\"Lat\":\"51.85036\",\"Long\":\"-8.48901\"},{\"Lat\":\"51.89857\",\"Long\":\"-8.47229\"}]"}

    I'm having an issue with my code below that I'm hoping someone might be able to shed some light on. I can't seem to get at the information in the values returned to me. Ideally I would like to be able to read the Lat and Long values for each row returned. Below is what I currently have:

        $.ajax({
            type: "POST",
            url: "page.aspx/LoadWayPoints",
            data: "{'args': '" + $('numJourneys').val() + "'}",
            contentType: "application/json; charset=utf-8",
            dataType: "json",
            success: function (msg) {
                if (msg.d != '[]') {
                    var lat = "";
                    var long = "";
                    $.each(msg.d, function () {
                        lat = this['MapLatPosition'];
                        long = this['MapLongPosition'];
                    });
                    alert('lat =' + lat + ', long =' + long);
                }
            }
        });

    I think the issue is something to do with how the JSON is formatted, but I might be wrong. Any help would be great. Thanks, Rich
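
    Two things stand out, inferred from the sample response rather than tested: d arrives as a JSON string (not an array), so it needs a second parse, and the properties inside it are Lat/Long rather than MapLatPosition/MapLongPosition. A sketch of the success handler:

        success: function (msg) {
            var points = $.parseJSON(msg.d);   // msg.d is a string, so parse it again
            $.each(points, function (i, p) {
                console.log('lat = ' + p.Lat + ', long = ' + p.Long);
            });
        }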

    Read the article
