Search Results

Search found 17627 results on 706 pages for 'hierarchical query'.

  • Wunderground and UTC Offset

    - by Brandon
    I am consuming the international weather forecasts via Wunderground's XML API: http://wiki.wunderground.com/index.php/API_-_XML Looking at the output for Kabul, Afghanistan, for instance: http://api.wunderground.com/auto/wui/geo/ForecastXML/index.xml?query=OAKB I notice that there is no UTC offset. The closest thing I can see is this: <tz_short>AFT</tz_short> which identifies the current time zone as AFT. The problem is that there is no universally accepted set of time zone abbreviations, so I cannot take these abbreviations and look up an offset from C#'s TimeZoneInfo objects. Is there a listing of Wunderground's time zone abbreviations/names/offsets so I can map their time zones to TimeZoneInfo objects, or is there a better way to get this information? I need TimeZoneInfo so I can calculate daylight saving time for different locations internationally.
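
    One workable approach is a hand-maintained lookup table from Wunderground's <tz_short> values to IANA time zone IDs, letting a real time zone library compute the DST-aware offset. Below is a minimal Python sketch of that idea; the table entries are illustrative assumptions (AFT does correspond to Asia/Kabul, UTC+4:30, but the full table would have to be built from the abbreviations Wunderground actually returns), and in C# the same table would map to TimeZoneInfo IDs instead.

        from datetime import datetime
        from zoneinfo import ZoneInfo  # Python 3.9+

        # Illustrative mapping: Wunderground <tz_short> -> IANA zone ID.
        TZ_SHORT_TO_IANA = {
            "AFT": "Asia/Kabul",         # Afghanistan Time, UTC+4:30, no DST
            "GMT": "Etc/GMT",
            "NZDT": "Pacific/Auckland",
        }

        def current_utc_offset(tz_short):
            """Return the current DST-aware UTC offset for a tz_short code."""
            zone = ZoneInfo(TZ_SHORT_TO_IANA[tz_short])
            return datetime.now(zone).utcoffset()

        print(current_utc_offset("AFT"))  # 4:30:00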

  • Index on column with only 2 distinct values

    - by Will
    I am wondering about the performance of this index: I have an "Invalid" varchar(1) column that has 2 values: NULL or 'Y'. I have an index on (invalid), as well as on (invalid, last_validated). Last_validated is a datetime (used for an unrelated SELECT query). I am flagging a small number (1-5%) of rows in the table as 'to be deleted', so that when I run DELETE FROM items WHERE invalid='Y' it does not perform a full table scan for the invalid items. The problem is that the actual DELETE is now quite slow, possibly because all the index entries have to be removed as the rows are deleted. Would a bitmap index provide better performance for this, or perhaps no index at all?

  • How to implement dynamic queries for Mnesia?

    - by Kerem
    I'm trying to implement a function that generates dynamic queries for Mnesia. For example, when the function is called with these arguments: dyn_query(list, person, [name, age], ["jack", 21]) I want to query Mnesia to list items in the person table whose name is "jack" and whose age is 21. I've tried to implement this using qlc:q(ListComprehension) and qlc:string_to_handle("ListComprehension"). The first failed because of compile errors; the compiler didn't let me use functions instead of list comprehensions, or variables instead of record names, as in "Item#Table.Member". The second failed because erl_eval couldn't handle records and threw exceptions like {undefined_record, person}. Which method should I use, and how could I solve these problems? Or should I use a different method? Thanks.

  • better way to write this

    - by ash34
    Hi, I have to create a hash of the form h[:bill] = ["Billy", "NA", 20, "PROJ_A"], keyed by login, where 20 is the cumulative number of hours reported by that login across all task transactions returned by the query (each login has multiple reported transactions). Did I do this in a bad way, or does this seem alright?

        h = Hash.new
        Task.find_each(:include => [:user], :joins => :user,
                       :conditions => ["from_date >= ? AND from_date <= ? AND category = ?",
                                       Date.today - 30, Date.today + 30, 'PROJ1']) do |t|
          h[t.login.intern] = [t.user.name, 'NA',
                               h[t.login.intern].nil? ? (t.hrs_per_day * t.num_days)
                                                      : h[t.login.intern][2] + (t.hrs_day * t.workdays),
                               t.category]
        end

    Also, if I have to aggregate not just by login but by login and category, how do I accomplish this? thanks, ash

  • How to get the place name by latitude and longitude using openstreetmap in android

    - by Gaurav kumar
    In my app I am using OSM rather than Google Maps. I have a latitude and longitude, so how can I query the OSM (Nominatim) database to get the city name? Please help me.

        final String requestString = "http://nominatim.openstreetmap.org/reverse?format=json&lat="
                + Double.toString(lat) + "&lon=" + Double.toString(lon)
                + "&zoom=18&addressdetails=1";
        RequestBuilder builder = new RequestBuilder(RequestBuilder.GET, URL.encode(requestString));
        try {
            @SuppressWarnings("unused")
            Request request = builder.sendRequest(null, new RequestCallback() {
                @Override
                public void onResponseReceived(Request request, Response response) {
                    if (response.getStatusCode() == 200) {
                        String city = "";
                        try {
                            JSONValue json = JSONParser.parseStrict(response);
                            JSONObject address = json.isObject().get("address").isObject();
                            final String quotes = "^\"|\"$";
                            if (address.get("city") != null) {
                                city = address.get("city").toString().replaceAll(quotes, "");
                            } else if (address.get("village") != null) {
                                city = address.get("village").toString().replaceAll(quotes, "");
                            }
                        } catch (Exception e) {
                        }
                    }
                }
            });
        } catch (Exception e1) {
        }
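
    For comparison, here is a minimal Python sketch of the same reverse-geocoding call against the Nominatim endpoint used above, assuming the requests library; the example coordinates and the User-Agent string are placeholders. Nominatim may return the locality under "city", "town", or "village" depending on the place, so the lookup falls through those keys.

        import requests

        def city_for(lat, lon):
            """Reverse-geocode a coordinate with Nominatim and return a city-like name."""
            resp = requests.get(
                "https://nominatim.openstreetmap.org/reverse",
                params={"format": "json", "lat": lat, "lon": lon,
                        "zoom": 18, "addressdetails": 1},
                headers={"User-Agent": "example-weather-app/1.0"},  # placeholder; Nominatim expects an identifying UA
                timeout=10,
            )
            resp.raise_for_status()
            address = resp.json().get("address", {})
            return address.get("city") or address.get("town") or address.get("village")

        print(city_for(34.55, 69.21))  # illustrative coordinates, roughly Kabul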

  • How does Active Record automatically set the auto-incremented value of ID?

    - by piemesons
    Here is what we are doing:

        an_order = Order.new
        an_order.name = "Dave Thomas"
        an_order.email = "[email protected]"
        an_order.address = "123 Main St"
        an_order.pay_type = "check"
        an_order.save

    Now it will insert a new row into the orders table, right? I have an ID column in that table. Suppose that after inserting this record, the value of the ID corresponding to it is 100. How did this value of 100 come about? Is the previous value of ID, i.e. 99 (say), stored somewhere in a variable/file in the ORM layer, or was a query fired before inserting to find the last inserted value, or does the database itself handle this, or something else?

  • SQL Server, Remote Stored Procedure, and DTC Transactions

    - by marc
    Our organization has a lot of its essential data in a mainframe Adabas database. We have ODBC access to this data and from C# have queried/updated it successfully using ODBC/Natural "stored procedures". What we'd like to be able to do now is to query a mainframe table from within SQL Server 2005 stored procs, dump the results into a table variable, massage it, and join the result with native SQL data as a result set. The execution of the Natural proc from SQL works fine when we're just selecting it; however, when we insert the result into a table variable SQL seems to be starting a distributed transaction that in turn seems to be wreaking havoc with our connections. Given that we're not performing updates, is it possible to turn off this DTC-escalation behavior? Any tips on getting DTC set up properly to talk to DataDirect's (formerly Neon Systems) Shadow ODBC driver?

  • HSM - cryptoki - opening sessions overhead

    - by Raj
    I have a question regarding sessions with an HSM. I am aware that there is an overhead if you initialise and finalise the Cryptoki API (C_Initialize/C_Finalize) for every file you want to encrypt/decrypt. My questions are:

    1. Is there an overhead in opening and closing an individual session for every file you want to encrypt/decrypt?
    2. What is the maximum number of sessions I can have open with an HSM simultaneously without affecting performance?
    3. Is opening and closing a session for each individual file the best approach, or is it better to open one session, process multiple files, and then close the session?

    Thanks

  • How to deny payment via PayPal IPN?

    - by Nick
    Hello all, I need to create dynamic 'Pay Now' buttons on my site, and PayPal says the way to do this is via an HTML FORM with preset variables for the price, currency, and item of the purchase. I use PayPal IPN to notify me when a payment has complete. However, what's to stop someone from modifying the query parameters of the Pay Now button to change the price? Some people have told me to redirect the button through a PHP file that sends you to a PayPal payment page with the parameters in place, but the price could just as easily be manipulated in the Web browser's address bar. My question is, how can I deny a payment if the information I receive from PayPal's IPN service is invalid (if the price doesn't match our records)? I'm quite confused and couldn't find any documentation on what I'm looking for. Hopefully, you guys can help. Thanks!
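
    For reference, a minimal Python sketch of the usual server-side check (the Order stub and get_order lookup are hypothetical stand-ins for your own order store, and the receiver email is a placeholder for your PayPal account): the IPN listener echoes the message back to PayPal's validation endpoint with cmd=_notify-validate, and only treats the payment as good if PayPal replies VERIFIED and the amount, currency, and recipient match your records.

        from collections import namedtuple
        import requests

        PAYPAL_VERIFY_URL = "https://ipnpb.paypal.com/cgi-bin/webscr"

        Order = namedtuple("Order", "price currency")

        def get_order(item_number):
            # Hypothetical lookup into your own database.
            return Order(price="10.00", currency="GBP")

        def ipn_is_valid(ipn_params):
            """ipn_params: dict of the POST fields PayPal sent to your IPN listener."""
            # 1. Echo the message back to PayPal; it replies VERIFIED or INVALID.
            payload = {"cmd": "_notify-validate", **ipn_params}
            reply = requests.post(PAYPAL_VERIFY_URL, data=payload, timeout=30).text
            if reply != "VERIFIED":
                return False

            # 2. Compare the notified details against your own records.
            order = get_order(ipn_params.get("item_number"))
            return (
                ipn_params.get("payment_status") == "Completed"
                and ipn_params.get("mc_currency") == order.currency
                and float(ipn_params.get("mc_gross", 0)) == float(order.price)
                and ipn_params.get("receiver_email") == "seller@example.com"  # your PayPal address
            )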

  • schema for storing different varchar fields over time?

    - by Henry
    This app I'm working on needs to store some metadata fields about an entity. The problem is that we can already foresee that these fields are going to change a lot in the future. I'm using Hibernate (in ColdFusion) and each of the entity's properties is mapped to a column in the entity table, but altering table columns later down the road will be costly and error-prone, right? Should I go for something like this?

        MetaDataField
        -------------
        metaDataFieldID (PK), name

        FieldValue
        ----------
        EntityID (PK, FK), metaDataFieldID (PK, FK), value [varchar(255)]

    Is this common? Anything to watch out for? P.S. I also thought of using XML on SQL Server 05+. After talking to some people, it seems this is not a viable solution because it will be too slow for certain queries for reporting purposes.

  • PostgreSQL - best way to return an array of key-value pairs

    - by Matt W
    I'm trying to select a number of fields, one of which needs to be an array with each element of the array containing two values. Each array item needs to contain a name (character varying) and an ID (numeric). I know how to return an array of single values (using the ARRAY keyword), but I'm unsure how to return an array of an object which itself contains two values. The query is something like:

        SELECT t.field1,
               t.field2,
               ARRAY(-- with each element containing two values, i.e. {'TheName', 1})
        FROM MyTable t

    I read that one way to do this is by selecting the values into a type and then creating an array of that type. The problem is, the rest of the function already returns a type (which means I would then have nested types - is that OK? If so, how would you read this data back in application code, i.e. with a .NET data provider like Npgsql?). Any help is much appreciated.

  • NullReferenceException in EntityFramework, how come?

    - by Mickel
    Take a look at this query:

        var user = GetUser(userId);
        var sessionInvites = ctx.SessionInvites
            .Include("InvitingUser")
            .Include("InvitedUser")
            .Where(e => e.InvitedUser.UserId == user.UserId)
            .ToList();
        var invites = sessionInvites;

        // Commenting out the two lines below makes it work as expected.
        foreach (var invite in sessionInvites)
            ctx.DeleteObject(invite);
        ctx.SaveChanges();

        return invites;

    Now, everything here executes without any errors. The invites that exist for the user are deleted and the invites are returned successfully. However, when I then try to navigate to either InvitingUser or InvitedUser on any of the returned invites, I get a NullReferenceException. All other properties of the returned SessionInvites work fine. How come? [EDIT] The weird thing is, if I comment out the lines with the delete, it works as expected (except that the entities will not get deleted :S).

  • how do I join and include the association

    - by Mark
    Hi All, How do I use both include and joins in a named scope? Post is polymorphic.

        class Post
          has_many :approved_comments, :class_name => 'Comment'
        end

        class Comment
          belongs_to :post
        end

        Comment.find(:all, :joins => :post,
                     :conditions => ["post.approved = ? ", true],
                     :include => :post)

    This does not work, as :joins does an inner join and :include does a left outer join. The database throws an error because both joins can't be in the same query.

  • SQL Server 2005 Fail: Return Dates As Strings

    - by Abs
    Hello all, I am using the SQL Server PHP driver; I think this question can be answered without knowing what that is. I have come across this many times: what does it mean by NAMES (column names?):

        SET NAMES utf8

    Is there a query similar to the above that will get my dates returned as strings? For some reason, on my SQL Server 2008 on Vista, this works:

        $connectionInfo = array('Database' => $dbname, 'ReturnDatesAsStrings' => true)

    But 'ReturnDatesAsStrings' does not work on my SQL Server 2005 on a Windows Server machine; I can't execute any queries after setting it! Does SQL Server 2005 support ReturnDatesAsStrings? Is there some other parameter I can pass to do the same? Thanks all for any help. EDIT: I should have mentioned that if there is a solution, I am hoping for one in the form of a setting that can be applied before any queries are executed, as I do not have control over what queries will be executed.

  • Lookahead regex produces unexpected group

    - by Ivan Yatskevich
    I'm trying to extract a page name and query string from a URL which should not contain .html. Here is example code in Java:

        public class TestRegex {
            public static void main(String[] args) {
                Pattern pattern = Pattern.compile("/test/(((?!\\.html).)+)\\?(.+)");
                Matcher matcher = pattern.matcher("/test/page?param=value");
                System.out.println(matcher.matches());
                System.out.println(matcher.group(1));
                System.out.println(matcher.group(2));
            }
        }

    Running this code gives the following output:

        true
        page
        e

    What's wrong with my regex, so that the second group contains the letter e instead of param=value?
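
    A quick way to see what is happening, sketched here with Python's re module (the group numbering rules are the same as in java.util.regex): the inner ((?!\.html).) is itself a capturing group, so group 2 only ever holds the last single character it matched, and the query string actually lands in group 3. Making the inner group non-capturing gives the intended two groups.

        import re

        url = "/test/page?param=value"

        # Original grouping: group 2 is the inner one-character group, so it holds
        # only its last repetition ("e"); the query string is in group 3.
        m = re.match(r"/test/(((?!\.html).)+)\?(.+)", url)
        print(m.group(1), m.group(2), m.group(3))   # page e param=value

        # With a non-capturing inner group, group 2 is the query string.
        m = re.match(r"/test/((?:(?!\.html).)+)\?(.+)", url)
        print(m.group(1), m.group(2))               # page param=value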

  • .getScript getting the redirect url of javascript

    - by user177883
    I'd like to execute a remote JavaScript file which redirects the user to another page on my domain, with data passed as a query string. I want to get hold of this data, which is passed on to the page on my domain.

        $.getScript('http://site.com/foo.js', function() {
            // foo.js redirects to another page on my domain with data,
            // and I'd like to capture that data from this function;
            // at least if I can find the parameters that were passed, I'll be fine.
        });

    What should I do? http://api.jquery.com/jQuery.getScript/

  • Cannot implicitly convert type System.Collections.Generic.IEnumerable

    - by Cen
    I'm receiving this error in my LINQ statement:

        Cannot implicitly convert type 'System.Collections.Generic.IEnumerable' to
        'hcgames.ObjectClasses.ShoppingCart.ShoppingCartCartAddon'.
        An explicit conversion exists (are you missing a cast?)

    from this query:

        ShoppingCartItems items = Cart.GetAllItems();
        ShoppingCartCartAddons addons = Cart.GetAllAddons();

        var stuff = from x in items
                    select new ShoppingCartItem()
                    {
                        ProductID = x.ProductID,
                        Quantity = x.Quantity,
                        Name = x.Name,
                        Price = x.Price,
                        Weight = x.Weight,
                        Addons = (from y in addons
                                  where y.ShoppingCartItemID == x.ID
                                  select y)
                    };

    I can't figure out how to cast this properly. Any suggestions? Thanks for your help!

  • Alternative databases to use when putting IIS Logs into a database using LogParser

    - by Robin Day
    We have run some scripts that use LogParser to dump our IIS logs into a SQL Server database. We can then query this to get simple stats on hits, usage, etc. It's also useful when linking it to error log databases and performance counter databases to compare usage with errors, etc. Having implemented this for just one system, and for only the last 2-3 weeks, we already have a 5GB database with around 10 million records. This is making any queries to this database quite slow and will no doubt cause storage issues if we continue to log as we are. Can anyone suggest any alternative databases that we could use for this data that would be more efficient for such logs? I'd be particularly interested in any experience of Google's BigTable or Amazon's SimpleDB. Are either of these suitable for reporting queries? COUNTs, GROUP BYs, PIVOTs?

  • Can't enumerate LinQ results with left join

    - by nvtthang
        var itemSet = from item in da.GetList<Models.account>()
                      join file in objFileStorageList on item.account_id equals file.parent_id into objFile
                      from fileItem in objFile.DefaultIfEmpty()
                      where item.company != null && item.company.company_id == 123
                      orderby item.updatedDate descending
                      select new
                      {
                          Id = item.account_id,
                          RefNo = item.refNo,
                          StartDate = item.StartDate,
                          EndDate = item.EndDate,
                          Comment = item.comment,
                          FileStorageID = fileItem != null ? fileItem.fileStorage_id : -1,
                          Identification = fileItem != null ? fileItem.identifier : null,
                          fileName = fileItem != null ? fileItem.file_nm : null
                      };

    It raises this error when I try to enumerate the collection returned by the LINQ query above:

        LINQ to Entities does not recognize the method
        'System.Collections.Generic.IEnumerable`1[SCEFramework.Models.fileStorage]
        DefaultIfEmpty[fileStorage](System.Collections.Generic.IEnumerable`1[SCEFramework.Models.fileStorage])'
        method, and this method cannot be translated into a store expression

    The enumeration itself is just:

        foreach (var item in itemSet)
        {
            string itemRef = item.RefNo;
        }

    Please suggest any solutions. Thanks in advance.

  • What is .htaccess RewriteRule best practice?

    - by Pablo
    Is it better to have a single RewriteRule with a bunch of regex, or multiple rules with fewer regexes for the server to process? Will there be any performance differences? Here is an example of a single rule with almost all regex groups optional:

        RewriteRule ^gallery/?([\w]+)?/?([\w]+)?/?([\d]+)?/?([\w]+)/?$ /gallery.php?$1=$2&start=$3&by=$4 [NC]

    Here are some of the rules that would replace the one above:

        RewriteRule ^gallery/category/([\w]+)/$ /gallery.php?category=$1& [NC]
        RewriteRule ^gallery/category/([\w]+)/([\d]+)/$ /gallery.php?category=$1&start=$2 [NC]
        RewriteRule ^gallery/category/([\w]+)/([\d]+)/([\w]+)/$ /gallery.php?category=$1&start=$2&by=$3 [NC]
        ...
        RewriteRule ^gallery/tag/([\w]+)/$ /gallery.php?category=$1& [NC]
        RewriteRule ^gallery/tag/([\w]+)/([\d]+)/$ /gallery.php?category=$1&start=$2 [NC]
        RewriteRule ^gallery/tag/([\w]+)/([\d]+)/([\w]+)/$ /gallery.php?category=$1&start=$2&by=$3 [NC]
        ...

    I'll be glad to hear your opinions or personal experiences.

  • PL/SQL Package invalidated

    - by FrustratedWithFormsDesigner
    I have a script that makes use of a package (PKG_MY_PACKAGE). I change some of the fields in a query in that package and then recompile it (I don't change or compile any other packages). I run the script and I get an error that looks like:

        ORA-04068: existing state of packages has been discarded
        ORA-04061: existing state of package body "USER3.PKG_MY_PACKAGE" has been invalidated
        ORA-04065: not executed, altered or dropped package body "USER3.PKG_MY_PACKAGE"
        ORA-06508: PL/SQL: could not find program unit being called: "USER3.PKG_MY_PACKAGE"
        ORA-06512: at line 34

    I run the script again (without changing anything else in the system) and it executes successfully. I thought that compiling before executing the script would fix any invalid references. This is 100% reproducible, and the more I test this script the more annoying it gets. What could cause this, and what would fix it? (Oracle 10g, using PL/SQL Developer 7)

  • Keeping a large volume of data in Session - Suggestions / alternatives?

    - by Fishcake
    I'm developing a web app for which the client wants us to query their data as little as possible. The data will be coming from a Microsoft CRM instance. So we've agreed that data will only be queried as and when it is needed, therefore if a web user wants to see a list of contacts (for example) that list is fetched into a local DataTable. Then if a new contact is created on the website the new contact is sent to CRM and added to the local DataTable at the same time. Likewise for edits. If the user then looks at their contacts again the data will just come from the local DataTable. At the moment local data is being kept in Session but my concern is that too much memory will start being used up. However traffic is expected to be pretty small, perhaps no more than 20 concurrent users so am I worrying about nothing or is there a better way you can suggest to handle this?

  • Find all clauses related to an atom

    - by Luc Touraille
    This is probably a very silly question (I just started learning Prolog a few hours ago), but is it possible to find all the clauses related to an atom? For example, assuming the following knowledge base:

        cat(tom).
        animal(X) :- cat(X).

    is there a way to obtain every possible piece of information about tom (or at least all the facts that are explicitly stated in the base)? I understand that a query like this is not possible:

        ?- Pred(tom).

    so I thought I could write a rule that would deduce the correct information:

        meta(Object, Predicate) :-
            Goal =.. [Predicate, Object],
            call(Goal).

    so that I could write queries such as:

        ?- meta(tom, Predicate).

    but this does not work because the arguments to call are not sufficiently instantiated. So basically my question is: is this at all possible, or is Prolog not designed to provide this kind of information? And if it is not possible, why?

  • Get data types from arbitrary sql statement in SQL Server 2008

    - by Christopherous 5000
    Given some arbitrary SQL, I would like to get the data types of the returned columns. The statement might join many tables, views, TVFs, etc. I know I could create a view based on the query and get the data types from that; I'm hoping there's a quicker way. The only thing I've been able to think of is writing a .NET utility to run the SQL and examine the results; I'm wondering if there is a T-SQL answer. I.e., given (not real tables, just an example):

        SELECT p.Name AS PersonName, p.Age, a.Account as AccountName
        FROM Person as p
        LEFT JOIN Account as a ON p.Id = a.OwnerId

    I would like to have something like:

        PersonName: (nvarchar(255), not null)
        Age: (smallint, not null)
        etc...
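
    The fallback the question mentions (running the statement and examining the result metadata) is only a few lines if you use a DB-API driver's cursor.description, which exposes the name, mapped type, size, precision/scale, and nullability of each returned column. A minimal Python sketch with pyodbc is below; the connection string is a placeholder, and the type shown is the Python type the driver maps the column to, not the exact SQL type name.

        import pyodbc

        # Placeholder connection string; point it at your own server/database.
        conn = pyodbc.connect(
            "DRIVER={SQL Server};SERVER=localhost;DATABASE=MyDb;Trusted_Connection=yes;"
        )
        cur = conn.cursor()

        cur.execute("""
            SELECT p.Name AS PersonName, p.Age, a.Account AS AccountName
            FROM Person AS p
            LEFT JOIN Account AS a ON p.Id = a.OwnerId
        """)

        # cursor.description holds one tuple per returned column:
        # (name, type_code, display_size, internal_size, precision, scale, null_ok)
        for name, type_code, _, internal_size, precision, scale, null_ok in cur.description:
            nullability = "null" if null_ok else "not null"
            print(f"{name}: {type_code.__name__}({internal_size}), {nullability}")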

  • In django models, how to make all table names not have the app label?

    - by Luigi
    I have a database that was already being used by other applications before I began writing a web interface for it with Django. The table names follow simple naming standards, so the Django model Customer should map to the table "customer" in the DB. At the same time, I'm adding new tables/models. I find it cumbersome to use app_customer every time I have to write a query in the other applications (Django's ORM is definitely not enough for them), and I don't want to rename the existing tables. What is the best way to make all models in my Django app use table names without the applabel_ prefix, besides adding a Meta class with db_table= to each model? Is there any reason why I shouldn't do this? I have only one web app that needs to access this DB; everything else doesn't use Django models.
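
    For reference, the per-model mechanism the question wants to avoid is Django's Meta.db_table; Django only generates the applabel_modelname name when db_table is absent, so any model that sets it uses the plain table name. A minimal sketch (model fields and table names are illustrative):

        from django.db import models

        class Customer(models.Model):
            name = models.CharField(max_length=100)

            class Meta:
                db_table = "customer"   # existing table, instead of the default "myapp_customer"

        class Invoice(models.Model):
            customer = models.ForeignKey(Customer, on_delete=models.CASCADE)
            total = models.DecimalField(max_digits=10, decimal_places=2)

            class Meta:
                db_table = "invoice"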
