Search Results

Search found 42428 results on 1698 pages for 'database query'.

Page 528/1698 | < Previous Page | 524 525 526 527 528 529 530 531 532 533 534 535  | Next Page >

  • ERROR CHECKING !!

    - by moata_u
    I am trying to catch any error when running a command, in order to write a log file / report. I was trying to write this code: FUNCTION FOR VALIDATION function valid (){ if [ $? -eq 0 ]; then echo "$var1" ": status : OK" else echo "$var1" ": status : ERROR" fi } COMMAND FUNCTION function save(){ sed -i "/:@/c connection.url=jdbc:oracle:thin:@$ip:1521:$dataBase" $search var1="adding database ip" valid $var1 sed -i "/connection.username/c connection.username=$name" #$search retval=$? var1="adding database SID" valid $var1 $retval } save OUTPUT adding database ip : status : OK sed: no input file I want the output this way: adding database ip : status : OK sed: no input file : status : ERROR (OR) adding database ip : status : OK adding database SID : status : ERROR I have tried a lot, but it is not working for me.

    Read the article

  • Use LINQ to count the number of combinations existing in two lists

    - by Ben McCormack
    I'm trying to create a LINQ query (or queries) that counts the total number of occurrences of a combination of items from one list within a different list. For example, take the following lists: CartItems DiscountItems ========= ============= AAA AAA AAA BBB AAA BBB BBB CCC CCC DDD The result of the query operation should be 2, since I can find two combinations of AAA and BBB (from DiscountItems) within the contents of CartItems. My thinking in approaching the query is to join the lists together to shorten CartItems to only include items from DiscountItems. The solution would then be to find the CartItem in the resulting query that occurs the fewest times, thus indicating how many combinations of items exist in CartItems. How can this be done? Here's the query I already have, but it's not working; query results in an enumeration with 100 items, far more than I expected. Dim query = From cartItem In Cart.CartItems Group Join discountItem In DiscountGroup.DiscountItems On cartItem.SKU Equals discountItem.SKU Into Group Select SKU = cartItem.SKU, CartItems = Group Return query.Min(Function(x) x.CartItems.Sum(Function(y) y.Quantity))
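    The underlying counting logic, independent of LINQ, can be sketched with multisets; the Python sketch below uses hypothetical sample data (3x AAA, 2x BBB, 1x CCC in the cart; one AAA and one BBB per discount) purely to illustrate counting how many complete sets of DiscountItems fit inside CartItems.

    ```python
    from collections import Counter

    # Hypothetical data: the cart holds 3x AAA, 2x BBB, 1x CCC,
    # and the discount needs one AAA and one BBB per combination.
    cart = Counter(["AAA", "AAA", "AAA", "BBB", "BBB", "CCC"])
    discount = Counter(["AAA", "BBB"])

    # For each required item, see how many times the cart can cover it;
    # the smallest ratio is the number of complete combinations.
    combinations = min(cart[item] // needed for item, needed in discount.items())
    print(combinations)  # 2
    ```

    The equivalent LINQ shape would group cart items by SKU, join against the discount items, and take the minimum of quantity divided by required quantity, rather than the minimum of the raw sums.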

    Read the article

  • Has anyone used an object database with a large amount of data?

    - by Jon Kruger
    Object databases like MongoDB and db4o are getting lots of publicity lately. Everyone who plays with them seems to love them. I'm guessing that they are dealing with about 640K of data in their sample apps. Has anyone tried to use an object database with a large amount of data (say, 50GB or more)? Are you still able to execute complex queries against it (like from a search screen)? How does it compare to your usual relational database of choice? I'm just curious. I want to take the object database plunge, but I need to know if it'll work on something more than a sample app.
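    For a sense of what a "search screen" style query looks like against one of these stores, here is a minimal PyMongo sketch; the collection, field names, and index are all hypothetical, and whether this stays fast at 50GB+ depends heavily on having such indexes in place.

    ```python
    from pymongo import MongoClient, ASCENDING, DESCENDING

    client = MongoClient("mongodb://localhost:27017")
    products = client.shop.products  # hypothetical database/collection

    # Indexes are what keep "complex" filters usable as the data grows.
    products.create_index([("category", ASCENDING), ("price", ASCENDING)])

    # A typical search-screen query: filter, sort, and page through results.
    cursor = (
        products.find({"category": "widgets", "price": {"$lte": 25}, "in_stock": True})
        .sort("price", DESCENDING)
        .limit(20)
    )
    for doc in cursor:
        print(doc["name"], doc["price"])
    ```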

    Read the article

  • Do I have to use stored procedures to get query-level security, or can I still do this with dynamic SQL?

    - by Peter Smith
    I'm developing an application where I'm concerned about locking down access to the database. I know I can develop stored procedures (with proper parameter checking) and limit a database user to an exact set of queries to execute. It's imperative that no queries other than the ones I created in the stored procedures be allowed to execute under that user. Ideally, even if a hacker gained access to the database connection (which only accepts connections from certain computers), they would only be able to execute the predefined stored procedures. Must I choose stored procedures for this, or can I use dynamic SQL with these fine-grained permissions?

    Read the article

  • What is the proper location for a sqlite3 database file?

    - by Elliot Chen
    Hi, everyone: I'm using a sqlite3 database to store my app's data. Instead of building a database within the program, I added an existing db file, 'abc.sqlite', to my project and put it under my 'Resources' folder. Since this db file should then be inside the bundle, in my init function I used the following statement to read it: NSString *path = [[NSBundle mainBundle] pathForResource:@"abc" ofType:@"sqlite"]; if(sqlite3_open([path UTF8String], &database) != SQLITE_OK) ... It works: the db can be opened and data can be retrieved from it. BUT, someone told me that it's better to copy this db file into a user folder such as 'Documents'. So, my question is: is it OK to use this db from the main bundle directly, or should I copy it to the user folder and then use that copy? Which is better? Thank you very much!

    Read the article

  • Strategies for avoiding SQL in your Controllers... or how many methods should I have in my Models?

    - by Keith Palmer
    So a situation I run into reasonably often is one where my models start to either: Grow into monsters with tons and tons of methods OR Allow you to pass pieces of SQL to them, so that they are flexible enough to not require a million different methods For example, say we have a "widget" model. We start with some basic methods: get($id) insert($record) update($id, $record) delete($id) getList() // get a list of Widgets That's all fine and dandy, but then we need some reporting: listCreatedBetween($start_date, $end_date) listPurchasedBetween($start_date, $end_date) listOfPending() And then the reporting starts to get complex: listPendingCreatedBetween($start_date, $end_date) listForCustomer($customer_id) listPendingCreatedBetweenForCustomer($customer_id, $start_date, $end_date) You can see where this is growing... eventually we have so many specific query requirements that I either need to implement tons and tons of methods, or some sort of "query" object that I can pass to a single ->query(Query $query) method... ... or just bite the bullet and start doing something like this: $list = MyModel->query("start_date > X AND end_date < Y AND pending = 1 AND customer_id = Z") There's a certain appeal to just having one method like that instead of 50 million other more specific methods... but it feels "wrong" sometimes to stuff a pile of what's basically SQL into the controller. Is there a "right" way to handle situations like this? Does it seem acceptable to be stuffing queries like that into a generic ->query() method? Are there better strategies?
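    One middle ground the question hints at is a small query/criteria object: the controller describes intent, and the model turns it into parameterized SQL. A minimal sketch, in Python for illustration (the Criteria class, table, and column names are hypothetical, not an existing library):

    ```python
    import sqlite3

    class Criteria:
        """Collects column/operator/value triples and renders parameterized SQL."""
        def __init__(self):
            self.clauses, self.params = [], []

        def where(self, column, op, value):
            # Column and operator come from code, never from user input;
            # only the value is bound as a query parameter.
            self.clauses.append(f"{column} {op} ?")
            self.params.append(value)
            return self

        def to_sql(self, table):
            where = " AND ".join(self.clauses) or "1=1"
            return f"SELECT * FROM {table} WHERE {where}", self.params

    # Controller side: no raw SQL strings, just intent.
    criteria = (Criteria()
                .where("start_date", ">", "2010-01-01")
                .where("pending", "=", 1)
                .where("customer_id", "=", 42))

    # Model side: render and execute.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE widgets (id INTEGER, start_date TEXT, pending INTEGER, customer_id INTEGER)")
    sql, params = criteria.to_sql("widgets")
    rows = conn.execute(sql, params).fetchall()
    ```

    The controller stays free of SQL, while the model keeps a single generic entry point instead of fifty named finder methods.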

    Read the article

  • How to make django test framework read from live database?

    - by lfborjas
    I realize there's a similar question here, but this one has a different approach: I have a django app that does queries over data indexed with djapian; I'd like to write unit tests for this app's search component, and, obviously, I'd need the django settings module and all connections with the database active, so the test runner that django provides seems ideal. However, the django testing framework creates a dummy database, and I'd hate to dump all my data to a fixture and then index it (the tests would take forever!). My data isn't at risk because the tests would only read from the database, so how could this be achieved? I'm new to this whole unit testing thing, so the solution of writing a new test runner mentioned in that similar question doesn't enlighten me much, at least not without some details.
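    For what "writing a new test runner" usually means in practice, here is a minimal sketch of a runner that skips test-database creation so the tests read the live database. The base class name varies by Django version (older releases use django.test.simple.DjangoTestSuiteRunner, newer ones django.test.runner.DiscoverRunner), and the module path is hypothetical, so treat this as an outline rather than drop-in code.

    ```python
    # myapp/test_runner.py  (hypothetical module path)
    from django.test.runner import DiscoverRunner


    class LiveDatabaseRunner(DiscoverRunner):
        """Run tests against the configured database instead of a test copy.

        Only safe because the tests in question are strictly read-only.
        """

        def setup_databases(self, **kwargs):
            # Skip creating/cloning the test database entirely.
            return None

        def teardown_databases(self, old_config, **kwargs):
            # Nothing was created, so nothing to destroy.
            pass
    ```

    Pointing the TEST_RUNNER setting at this class then makes manage.py test run against the real, already-indexed data.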

    Read the article

  • How do you create a writable copy of a sqlite database that is included in your iPhone Xcode project

    - by Iggy
    You can copy it over to your Documents directory when the app starts. Assuming that your sqlite db is in your project, you can use the code below to copy your database to your Documents directory. We are also assuming that your database is called mydb.sqlite. //copy the database to documents NSFileManager *fileManager = [NSFileManager defaultManager]; NSString *documentsDirectory = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0]; NSString *path = [documentsDirectory stringByAppendingPathComponent:@"mydb.sqlite"]; if(![fileManager fileExistsAtPath:path]) { NSData *data = [NSData dataWithContentsOfFile:[[[NSBundle mainBundle] resourcePath] stringByAppendingString:@"/mydb.sqlite"]]; [data writeToFile:path atomically:YES]; } Does anyone know of a better way to do this? This seems to work...

    Read the article

  • How to use NSObject subclass?

    - by Jon
    So I've created a subclass of NSObject called Query @interface Query : NSObject @property (nonatomic, assign) NSNumber *weight; @property (nonatomic, assign) NSNumber *bodyFat; @property (nonatomic, assign) NSNumber *activityLevel; @end Is this correct for setting the object's property? In VC1: BodyFatViewController *aViewController = [[BodyFatViewController alloc]init]; aViewController.query = self.query; [self.navigationController pushViewController:aViewController animated:YES]; In VC2: - (void)pickerView:(UIPickerView *)pickerView didSelectRow:(NSInteger)row inComponent:(NSInteger)component { Query *anQuery = [[Query alloc]init]; anQuery.bodyFat = [self.bodyFatArray objectAtIndex:row]; anQuery.weight = self.query.weight; self.query = anQuery; }

    Read the article

  • Why is the SELECT before the FROM in a SQL query?

    - by Scorpi0
    This is something that bothered me a lot at school. Five years ago, when I learned SQL, I always wondered why we specify the fields we want first and only then where we want them from. According to my idea, we should write: From Employee e Select e.Name So why does the norm say: Select e.Name -- Eeeeek, what does e mean? From Employee e -- OK, now I know what e is It took me weeks to understand SQL, and I know that a lot of that time was consumed by the wrong order of elements. It is like writing in C#: string name = employee.Name; var employee = this.GetEmployee(); So, I assume that it has a historical reason; does anyone know why?

    Read the article

  • Database Management: Metadata is more important than you think!

    Whether it's data warehousing, MDM or business intelligence, metadata gets added to the project plan, then downgraded, and eventually dropped from the project plan. Not including metadata and metadata management as part of the project has far-reaching and costly repercussions throughout the organization. Read on to learn more...

    Read the article

  • What open source document-oriented database system is most mature for Windows usage?

    - by jdk
    After using relational databases as back-end storage all my Windows programming life (currently .NET), I want to experiment with a document-oriented database by this Wikipedia definition; it can be standalone or layered over an existing non-commercial database system. What open source document-oriented database solution would you recommend from your own experience and why? A nice to have would be a .NET provider. Admittedly this is somewhat subjective and potentially argumentative so keep it real folks and I'll do the same - also your answers will be invaluable to others looking into document-oriented databases for the first time on Windows. I'm sure the overall value of your answers will outweigh any biases. Thanks.

    Read the article

  • What is the preferred tool/approach to putting a SQL Server database under source control?

    - by msigman
    I've evaluated RedGate SQL Source Control tool (http://www.red-gate.com/products/sql-development/sql-source-control/), and I believe that Team Foundation Server 2010 offers a way to do this as well (as touched on here http://blog.discountasp.net/using-team-foundation-server-2010-source-control-from-sql-server-management-studio/). Are there alternatives, or is one of these considered the preferred/standard solution?

    Read the article

  • 'json is null' error help in PHP

    - by bobby
    I get 'json is null' as the error. My PHP file: <?php if (isset($_REQUEST['query'])) { $query = $_REQUEST['query']; $url='https://www.googleapis.com/urlshortener/v1/'; $key='ApiKey'; $result= $url.($query).$key; $ch = curl_init($result); curl_setopt($ch, CURLOPT_HEADER, 0); curl_setopt($ch, CURLOPT_POST, 1); curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); curl_setopt($ch,CURLOPT_CONNECTTIMEOUT,1); $resp = curl_exec($ch); curl_close($ch); echo $resp; } ?> My HTML: <html> <head> <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js"></script> <script type="text/javascript"> $(document).ready(function(){ // when the user clicks the button $("button").click(function(){ $.getJSON("shortner.php?query="+$('#query').attr("value"),function(json){ $('#results').append('<p>Id : ' + json.id+ '</p>'); $('#results').append('<p>Longurl: ' + json.longurl+ '</p>'); }); }); }); </script> </head> <body> <input type="text" value="Enter a place" id="query" /><button>Get Coordinates</button> <div id="results"></div> Edited: <?php if (isset($_REQUEST['query'])) { $query = $_REQUEST['query']; $url='https://www.googleapis.com/urlshortener/v1/'; $key='Api'; $key2='?key='; $result= $url.$query.$key2.$key; $requestData= json_encode($result); echo var_dump($query); $ch = curl_init($requestData); curl_setopt($ch, CURLOPT_HEADER, 0); curl_setopt($ch, CURLOPT_POST, 1); curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); curl_setopt($ch,CURLOPT_CONNECTTIMEOUT,1); $resp = curl_exec($ch); curl_close($ch); echo $resp; } ?>

    Read the article

  • How do I reload a Roo project without clearing the database?

    - by Omniwombat
    I've been learning how to build projects using Roo and am making good progress. I have the nucleus of a project which correctly displays my defined entities and allows me to create, edit, and delete the representative objects. I am using mysql as the database, and I see that objects entered using the UI correctly appear in the mysql database. Per the Roo instructions, I am starting the webapp using "mvn tomcat:run". Unfortunately, I've discovered that whenever I restart Tomcat using Maven, it clears all of the existing objects out of the database. I'm left with empty tables. It seems to do this as the final step just prior to Tomcat stating that the server has started. I know this is just me being a n00b, but searches haven't been very helpful, and none of the project's XML files seem relevant,

    Read the article

  • How to use a database to generate multiple folder content pages? [migrated]

    - by VenomVipes
    Scenario: I am trying to build a mobile entertainment portal. It will enable users to download music and movies to their cell phones... Problem example: Suppose I upload 100 folders of songs, where each folder is one album. I want a way to generate a page with all the folder names (album names) on it. If a user clicks an album on that page, they should be taken to a page listing all the songs in that album. Clicking on any song name will let them download it. Can this be done in some automated way, or will I have to manually design each of the 3 pages for every album? If I do that, it's time-consuming and will also make it difficult to change anything like the footer or header...
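    The usual approach is to drive all three pages from the folder structure (or a database table built by scanning it) rather than hand-writing pages per album. A minimal sketch, in Python/Flask purely for illustration (the MUSIC_ROOT path and routes are hypothetical; the same pattern applies in PHP or any other stack):

    ```python
    import os
    from flask import Flask, abort, send_from_directory

    app = Flask(__name__)
    MUSIC_ROOT = "/srv/portal/music"  # hypothetical: one sub-folder per album

    @app.route("/albums")
    def album_list():
        # One page listing every album folder.
        albums = sorted(os.listdir(MUSIC_ROOT))
        return "<br>".join(f'<a href="/albums/{a}">{a}</a>' for a in albums)

    @app.route("/albums/<album>")
    def album_detail(album):
        # One view serves every album: list the songs in that folder.
        if album not in os.listdir(MUSIC_ROOT):
            abort(404)
        songs = sorted(os.listdir(os.path.join(MUSIC_ROOT, album)))
        return "<br>".join(f'<a href="/download/{album}/{s}">{s}</a>' for s in songs)

    @app.route("/download/<album>/<song>")
    def download(album, song):
        # send_from_directory safely resolves the path and streams the file.
        return send_from_directory(MUSIC_ROOT, os.path.join(album, song), as_attachment=True)
    ```

    Changing the header or footer then means editing one listing template and one detail template, not hundreds of static pages.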

    Read the article

  • Is it possible to load an entire SQL Server CE database into RAM?

    - by DanM
    I'm using LinqToSql to query a small SQL Server CE database. I've noticed that any operations involving sub-properties are disappointingly slow. For example, if I have a Customer table that is referenced by an Order table via a foreign key, LinqToSql will automatically create an EntitySet<Order> property. This is a nice convenience, allowing me to do things like Customer.Order.Where(o => o.ProductName == "Stopwatch"), but for some reason, SQL Server CE hangs up pretty badly when I try to do stuff like this. One of my queries, which isn't really that complicated, takes 3-4 seconds to complete. I can get the speed up to acceptable, even fast, if I just grab the two tables individually and convert them to List<Customer> and List<Order>, then join them manually with my own query, but this throws out a lot of the appeal of LinqToSql. So, I'm wondering if I can somehow get the whole database into RAM and just query that way, then occasionally save it. Is this possible? How? If not, is there anything else I can do to boost the performance? Note: My database in its initial state is about 250K and I don't expect it to grow to more than 1-2 MB. So, loading the data into RAM certainly wouldn't be a problem from a memory point of view.
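    SQL Server CE itself has no documented "load the whole file into RAM" switch, but the pattern the question describes (copy into memory, run the hot queries there, flush back occasionally) can be illustrated with Python's sqlite3 backup API as an analogy; the file and table names below are hypothetical and this is not CE-specific code.

    ```python
    import sqlite3

    # Hypothetical on-disk database with one small table.
    disk = sqlite3.connect("shop.db")
    disk.execute("CREATE TABLE IF NOT EXISTS orders (id INTEGER PRIMARY KEY, product TEXT)")
    disk.commit()

    # Copy everything into a purely in-memory database (Connection.backup, Python 3.7+).
    ram = sqlite3.connect(":memory:")
    disk.backup(ram)

    # Run the hot queries against the in-memory copy.
    rows = ram.execute("SELECT product, COUNT(*) FROM orders GROUP BY product").fetchall()

    # Occasionally persist changes back to the file.
    ram.backup(disk)
    ```

    For the LinqToSql case, the closest practical equivalents are usually pre-materializing the hot tables into lists (as the question already does) or checking that the foreign-key columns are indexed, since a 250K database should not need seconds per query.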

    Read the article

  • Does anyone else think instance variables are problematic in database-backed applications?

    - by Ben Aston
    It occurs to me that state control in languages like C# is not well supported. By this I mean it is left up to the programmer to manage the state of in-memory objects. A common use case is that instance variables in the domain model are copies of information residing in persistent storage (i.e., the database). Clearly this violates the single-point-of-authority principle, and "synchronisation" has to be managed by the developer. I envisage a system where, instead of instance variables, we have simple public accessor/mutator methods marked with attributes that link them to the database, and where reads and writes are mediated by a framework that decides whether to hit the database. Does such a system exist? Am I completely missing the point, or is there some truth to this idea?
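    The envisaged "accessors marked with attributes, mediated by a framework" can be sketched with Python descriptors; everything below (the mediator, the column names) is hypothetical and only shows the shape such a system could take.

    ```python
    class DBField:
        """Descriptor: reads and writes go through a mediator, not a copied instance variable."""

        def __init__(self, column):
            self.column = column

        def __get__(self, obj, objtype=None):
            if obj is None:
                return self
            # The mediator decides whether to serve a cached value or hit the database.
            return obj._mediator.read(obj._key, self.column)

        def __set__(self, obj, value):
            # Writes are mediated too (write-through, write-behind, etc.).
            obj._mediator.write(obj._key, self.column, value)


    class InMemoryMediator:
        """Stand-in for the framework; a real one would talk to the database."""

        def __init__(self):
            self.store = {}

        def read(self, key, column):
            return self.store.get((key, column))

        def write(self, key, column, value):
            self.store[(key, column)] = value


    class Customer:
        name = DBField("name")
        email = DBField("email")

        def __init__(self, key, mediator):
            self._key, self._mediator = key, mediator


    mediator = InMemoryMediator()
    customer = Customer(1, mediator)
    customer.name = "Ada"    # routed through the mediator, no copy held on the object
    print(customer.name)     # fetched back through the single point of authority
    ```

    ORMs with identity maps and unit-of-work tracking (Hibernate, Entity Framework, SQLAlchemy) cover part of this, but the synchronisation policy is still largely the developer's responsibility, which is the point the question raises.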

    Read the article

  • powershell missing member methods in array

    - by Andrew
    Hi guys, I have (yet another) PowerShell query. I have an array in PowerShell on which I need to use the remove() and split() methods. Normally you set up an array (or variable) and the above methods exist. On the $csv2 array below both methods are missing; I have checked using the Get-Member cmdlet. How can I go about using remove to get rid of the lines with nan? Also, how do I split the columns into two different variables? At the moment each element of the array displays one line; for each line I need to convert it into two variables, one for each column. timestamp Utilization --------- ----------- 1276505880 2.0763250000e+00 1276505890 1.7487730000e+00 1276505900 1.6906890000e+00 1276505910 1.7972880000e+00 1276505920 1.8141900000e+00 1276505930 nan 1276505940 nan 1276505950 0.0000000000e+00 $SystemStats = (Get-F5.iControl).SystemStatistics $report = "c:\snmp\data" + $gObj + ".csv" ### Allocate a new Query Object and add the inputs needed $Query = New-Object -TypeName iControl.SystemStatisticsPerformanceStatisticQuery $Query.object_name = $i $Query.start_time = $startTime $Query.end_time = 0 $Query.interval = $interval $Query.maximum_rows = 0 ### Make method call passing in an array of size one with the specified query $ReportData = $SystemStats.get_performance_graph_csv_statistics( (,$Query) ) ### Allocate a new encoder and turn the byte array into a string $ASCII = New-Object -TypeName System.Text.ASCIIEncoding $csvdata = $ASCII.GetString($ReportData[0].statistic_data) $csv2 = convertFrom-CSV $csvdata $csv2

    Read the article

  • SQL Server Compact Edition 3.5: Does the data persist in the database file?

    - by jerbersoft
    I have a question about a SQL Server Compact Edition database used by a desktop application; I am completely new to SQL Server Compact Edition. The question is: does the data inserted into the database persist even if the application is shut down or restarted? I ask because I can't seem to find my data when using SQL Server Management Studio to manage the database. Am I missing something? EDIT: Is SQL Server Compact Edition meant for caching local data only? Can't we use it the way we normally use SQL Server Express, for example managing data with SQL Server Management Studio?

    Read the article

  • How to get nicer error-messages in this bash-script?

    - by moata_u
    I'm trying to catch any error when running a command, in order to write a log file / report. I've tried this code: function valid (){ if [ $? -eq 0 ]; then echo "$var1" ": status : OK" else echo "$var1" ": status : ERROR" fi } function save(){ sed -i "/:@/c connection.url=jdbc:oracle:thin:@$ip:1521:$dataBase" $search var1="adding database ip" valid $var1 sed -i "/connection.username/c connection.username=$name" #$search var1="adding database SID" valid $var1 } save The output looks like this: adding database ip : status : OK sed: no input file But I want it to look like this: adding database ip : status : OK sed: no input file : status : ERROR or this: adding database ip : status : OK adding database SID : status : ERROR I've been trying a lot, but it's not working for me. :(

    Read the article

  • What steps should be taken to ensure that an open source database gets ready for production?

    - by I_like_traffic_lights
    I am considering using GridSQL in a production environment. However, I do have some indications that it is not ready. One is that it was dropped from EnterpriseDB's offering a while ago, and the forums seem to report a few wrong results and relatively severe bugs. The alternatives to GridSQL, however, cost around $100,000 to buy, so I was thinking of using some of that money to ensure that GridSQL gets ready for production. At the same time, I could risk spending $50,000 and months of work on the development of GridSQL, only to discover that the design was flawed and that a complete rewrite is needed. Then I would have to buy the commercial alternatives to GridSQL, and the existence of my startup would be at risk. Question: What steps would you take to ensure that there is as little risk as possible of the worst-case scenario described above happening? It is unrealistic that I could do much of the testing or code review/coding myself (I am also not the best developer), so please describe where to find the people who would need to do the work.

    Read the article

  • Filtering with joined tables

    - by viraptor
    I'm trying to improve some query performance, but the generated query does not look the way I expect it to. The results are retrieved using: query = session.query(SomeModel). options(joinedload_all('foo.bar')). options(joinedload_all('foo.baz')). options(joinedload('quux.other')) What I want to do is filter on the first joined table, but this way doesn't work: query = query.filter(FooModel.address == '1.2.3.4') It results in a clause like this attached to the query: WHERE foos.address = '1.2.3.4' which doesn't do the filtering properly, since the generated joins attach the tables foos_1 and foos_2. If I run that query manually but change the filtering clause to: WHERE foos_1.address = '1.2.3.4' AND foos_2.address = '1.2.3.4' it works fine. The question is, of course: how can I achieve this with sqlalchemy itself?
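    The usual SQLAlchemy answer is to make the join explicit and reuse it for the eager load with contains_eager, so the filter applies to the same table the entities are hydrated from. A minimal sketch, reusing the session and models from the question and assuming SomeModel.foo is the relationship to FooModel:

    ```python
    from sqlalchemy.orm import contains_eager, joinedload

    query = (
        session.query(SomeModel)
        # Join the relationship explicitly so the filter targets this exact table.
        .join(SomeModel.foo)
        .filter(FooModel.address == '1.2.3.4')
        # Reuse that join to populate SomeModel.foo instead of adding an aliased joinedload.
        .options(contains_eager(SomeModel.foo))
        # Unrelated relationships can still be eager-loaded as before.
        .options(joinedload('quux.other'))
    )
    ```

    Nested paths such as foo.bar and foo.baz can be handled the same way by joining them explicitly and passing the full path to contains_eager.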

    Read the article

  • Java Persistence: How do I cast the result of Query.getResultList()?

    - by GuiSim
    Hey everyone, I'm new to persistence / hibernate and I need your help. Here's the situation: I have a table that contains some stuff. Let's call them Persons. I'd like to get all the entries from the database that are in that table. I have a Person class that is a simple POJO with a property for each column in the table (name, age, ...). Here's what I have: Query lQuery = myEntityManager.createQuery("from Person"); List<Person> personList = lQuery.getResultList(); However, I get a warning saying that this is an unchecked conversion from List to List<Person>. I thought that simply changing the code to Query lQuery = myEntityManager.createQuery("from Person"); List<Person> personList = (List<Person>) lQuery.getResultList(); would work... but it doesn't. Is there a way to do this? Does persistence allow me to set the return type of the query? (Through generics maybe?)

    Read the article

  • Why does SQL Server 2000 treat SELECT test.* and SELECT t.est.* the same?

    - by Chris Pebble
    I butter-fingered a query in SQL Server 2000 and added a period in the middle of the table name: SELECT t.est.* FROM test Instead of: SELECT test.* FROM test And the query still executed perfectly. Even SELECT t.e.st.* FROM test executes without issue. I've tried the same query in SQL Server 2008, where the query fails (error: the column prefix does not match with a table name or alias used in the query). For reasons of pure curiosity I have been trying to figure out how SQL Server 2000 handles the table names in a way that would allow the butter-fingered query to run, but I haven't had much luck so far. Do any SQL gurus know why SQL Server 2000 ran the query without issue? Update: The query appears to work regardless of the interface used (e.g. Enterprise Manager, SSMS, OSQL) and, as Jhonny pointed out below, it bizarrely even works when you try: SELECT TOP 1000 dbota.ble.* FROM dbo.table

    Read the article
