Search Results

Search found 10177 results on 408 pages for 'thumbs db'.

Page 272/408

  • How to check if datetime is older than 20 seconds.

    - by Jelle
    Hello! This is my first time here, so I hope I'm posting this question in the right place. :) I need to build flood control for my script, but I'm not good at all these datetime-to-time conversions with UTC and such. I hope you can help me out. I'm using Google App Engine with Python. I've got a DateTimeProperty in the Datastore which should be checked; if it's older than 20 seconds, proceed. Could anybody help me out? In semi-pseudocode:
        q = db.GqlQuery("SELECT * FROM Kudo WHERE fromuser = :1", user)
        lastplus = q.get()
        if lastplus.date is older than 20 seconds:
            print "Go!"
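    For what it's worth, a minimal sketch of that check, assuming the property is a standard db.DateTimeProperty stored as a naive UTC datetime (Kudo, fromuser and date come from the question; is_older_than is illustrative):
        import datetime

        from google.appengine.ext import db

        def is_older_than(dt, seconds=20):
            # App Engine stores DateTimeProperty values as naive UTC datetimes.
            return datetime.datetime.utcnow() - dt > datetime.timedelta(seconds=seconds)

        q = db.GqlQuery("SELECT * FROM Kudo WHERE fromuser = :1", user)  # user comes from the handler
        lastplus = q.get()
        if lastplus is not None and is_older_than(lastplus.date):
            print "Go!"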

    Read the article

  • The CHOICE : Firebird or H2

    - by blow
    Hi, I have to choose a database to use in server mode for a Java desktop application. I think both are great databases for Java. In my opinion (I'm NOT well informed):
    H2 pros: it is Java based; the developers say it is very, very fast; it is easy to install, configure and use from a Java application.
    H2 cons: it is a young project; I have doubts about its reliability for commercial purposes.
    Firebird pros: rock-solid project; well documented; should be fast and well optimized for large data; has a Java driver...
    Firebird cons: it is not Java based... ?
    So, I can't choose between these two great DBs. Can I have a suggestion? Thanks.

    Read the article

  • Django many to many annotations and filters

    - by dl8
    So I have two models, Person and Film, in a many-to-many relationship. My goal is to grab a film and output the people in it who have also appeared in at least 10 films. For example, I can get the count for an individual person:
        >>> Person.objects.get(short__istartswith="Matt Damon").film_set.count()
        71
    However, if I try to filter all the actors of a particular film:
        >>> Film.objects.get(name__istartswith="Saving Private Ryan").actors.all().annotate(film_count=Count('film')).filter(film_count__gte=10)
        []
    it returns an empty set, since if I manually look at everyone's film_count it is 1, even though an actor such as Matt Damon (as seen above) has been in 71 films in my db. As you can see with this query, the annotation doesn't work:
        >>> Film.objects.get(name__istartswith="Saving Private Ryan").actors.all().annotate(film_count=Count('film'))[0].film_count
        1
        >>> Film.objects.get(name__istartswith="Saving Private Ryan").actors.all().annotate(film_count=Count('film'))[0].film_set.count()
        7
    and I can't seem to figure out a way to filter by film_set.count().
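    A sketch of one workaround, assuming Film.actors is the ManyToManyField to Person with the default reverse name 'film' (as the question's own queries suggest): annotating the film's actors manager counts only rows matched through that film's join, so the count has to be taken on a plain Person queryset restricted by primary key instead.
        from django.db.models import Count

        film = Film.objects.get(name__istartswith="Saving Private Ryan")
        actor_ids = film.actors.values_list('pk', flat=True)
        prolific_cast = (Person.objects
                         .filter(pk__in=actor_ids)            # cast of this film only
                         .annotate(film_count=Count('film'))  # counts all of each person's films
                         .filter(film_count__gte=10))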

    Read the article

  • Input array is longer than the number of columns in this table

    - by Adam
    I've recently started to use SQLite and began integrating it into a C# project I'm working on. However, my project randomly throws the exception: Input array is longer than the number of columns in this table. I'm having a hard time tracing the problem because it seems to be thrown at random.
        DataTable table = new DataTable();
        // exception is thrown here
        table = Global.db.ExecuteQuery("SELECT * FROM vm_manager");
    Some of the data that gets returned from this query is as follows: http://i.imgur.com/9rlLN.png If anyone has any advice, I'd be grateful. EDIT: I'm unable to show the ExecuteQuery function because it resides inside a DLL from the following SQLite wrapper: http://www.codeproject.com/KB/database/cs_sqlitewrapper.aspx

    Read the article

  • Need some tips on my SQL script?

    - by Nano HE
    Hi, I plan to create a table to store race results like this:
        Place  RaceNumber  Gender  Name        Result
        12     0112        Male    Mike Lee    1:32:40
        16     0117        Female  Rose Marry  2:20:40
    I'm confused about the column type definitions. Q1: I am not sure whether the result can be set to varchar(32) or should be some other type. Q2: For racenumber, which is better, int(11) or varchar(11)? Q3: Can I use UNIQUE KEY the way I do below? Q4: Do I need to split name into firstName and lastName in my DB table?
        DROP TABLE IF EXISTS `race_result`;
        CREATE TABLE IF NOT EXISTS `race_result` (
          `id` int(11) NOT NULL auto_increment,
          `racenumber` int(11) NOT NULL,
          `gender` enum('male','female') NOT NULL,
          `name` varchar(16) NOT NULL,
          `result` varchar(32) NOT NULL,
          PRIMARY KEY (`id`),
          UNIQUE KEY `racenumber` (`racenumber`,`id`)
        ) ENGINE=MyISAM AUTO_INCREMENT=3 DEFAULT CHARSET=utf8 AUTO_INCREMENT=3;
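    A sketch of one possible answer to Q1, Q2 and Q4, assuming MySQL (column names and sizes below are only illustrative): store the finishing time in a TIME column so it sorts and aggregates correctly, keep the race number as a fixed-width string if the leading zero in 0112 matters, and split the name.
        -- Illustrative alternative schema, not a drop-in replacement.
        CREATE TABLE IF NOT EXISTS `race_result` (
          `id`         INT NOT NULL AUTO_INCREMENT,
          `racenumber` CHAR(4) NOT NULL,                -- keeps the leading zero of '0112'
          `gender`     ENUM('male','female') NOT NULL,
          `first_name` VARCHAR(32) NOT NULL,
          `last_name`  VARCHAR(32) NOT NULL,
          `result`     TIME NOT NULL,                   -- '1:32:40' sorts and averages numerically
          PRIMARY KEY (`id`),
          UNIQUE KEY `uq_racenumber` (`racenumber`)     -- one entry per race number
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8;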

    Read the article

  • recv returns old data

    - by anon
    This loop is supposed to take data from a socket line by line and put it in a buffer. For some reason, when there is no new data to return, recv returns the last couple of lines it got. I was able to stop the bug by commenting out the first recv, but then I can't tell how long the next line will be. I know it's not a …
        while(this->connected){
            memset(buf, '\0', sizeof(buf));
            recv(this->sock, buf, sizeof(buf), MSG_PEEK); //get length of next message
            ptr = strstr(buf, "\r\n");
            if (ptr == NULL) continue;
            err = recv(this->sock, buf, (ptr-buf), NULL); //get next message
            printf("--%db\n%s\n", err, buf);
            tok[0] = strtok(buf, " ");
            for(i=1;tok[i-1]!=NULL;i++)
                tok[i] = strtok(NULL, " ");
            //do more stuff
        }

    Read the article

  • Sharepoint Foundation 2010 installation problems

    - by Robert Koritnik
    I'm having problems installing a development machine for SharePoint (Foundation) 2010. This is what I did so far, all on the same machine:
    1. Installed a clean Windows 7 x64 with 4 GB of RAM, not joined to any domain, just a simple standalone machine.
    2. Enabled the IIS-related features as described here, except the two IIS6-related ones.
    3. Installed SQL Server 2008 R2 Developer Edition (DB Engine and Writer enabled, but not SQL Agent).
    4. Installed Visual Studio 2010 Premium.
    5. Started installing SharePoint Foundation 2010: first extracted the files, changed the config to allow installation on Windows 7, then installed it as Server Farm (then Complete) to avoid installing SQL Express.
    6. Created a separate SPF_CONFIG local user with the "Log on as a service" right.
    7. Opened the SPF Management Shell and ran New-SPConfigurationDatabase so that I can use a non-domain username (the SPF_CONFIG user created in the previous step).
    But all I get is this: [error screenshot not included in this excerpt]. The outcome after the error is: the database Sharepoint2010Config is created; the SPF_CONFIG user is added to SQL Server and attached to the newly created database as dbowner; and checking the SQL Server security logins, this user has the following rights: dbcreator, securityadmin, public.

    Read the article

  • Including uncovered files in Devel::Cover reports

    - by Markus
    I have a project set up like this:
        bin/fizzbuzz-game.pl
        lib/FizzBuzz.pm
        test/TestFizzBuzz.pm
        test/TestFizzBuzz.t
    When I run coverage on this, using
        perl -MDevel::Cover=-db,/tmp/cover_db test/*.t
    ... I get the following output:
        ----------------------------------- ------ ------ ------ ------ ------ ------
        File                                  stmt   bran   cond    sub   time  total
        ----------------------------------- ------ ------ ------ ------ ------ ------
        lib/FizzBuzz.pm                      100.0  100.0    n/a  100.0    1.4  100.0
        test/TestFizzBuzz.pm                 100.0    n/a    n/a  100.0   97.9  100.0
        test/TestFizzBuzz.t                  100.0    n/a    n/a  100.0    0.7  100.0
        Total                                100.0  100.0    n/a  100.0  100.0  100.0
        ----------------------------------- ------ ------ ------ ------ ------ ------
    That is: the totally uncovered file bin/fizzbuzz-game.pl is not included in the results. How do I fix this?
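    One hedged workaround, assuming the script can be run non-interactively: Devel::Cover only reports on files that were actually loaded during the covered run, so executing the uncovered program against the same coverage database makes it show up in the report.
        perl -MDevel::Cover=-db,/tmp/cover_db bin/fizzbuzz-game.pl
        cover /tmp/cover_db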

    Read the article

  • "SELECT TOP", "LEFT OUTER JOIN", "ORDER BY" gives extra rows

    - by Codesleuth
    I have the following Access query, which I'm running through OLE DB in .NET:
        SELECT TOP 25 tblClient.ClientCode, tblRegion.Region
        FROM (tblClient LEFT OUTER JOIN tblRegion
            ON tblClient.RegionCode = tblRegion.RegionCode)
        ORDER BY tblRegion.Region
    There are 431 records within tblClient that have RegionCode set to NULL. For some reason, the query above returns all 431 of those records instead of the first 25. If I change the query to ORDER BY tblClient.Client (the name of the client), like so:
        SELECT TOP 25 tblClient.ClientCode, tblRegion.Region
        FROM (tblClient LEFT OUTER JOIN tblRegion
            ON tblClient.RegionCode = tblRegion.RegionCode)
        ORDER BY tblClient.Client
    I get the expected result set of 25 records, showing a mixture of region names and NULL values. Why does the TOP clause not work when ordering by a field retrieved through a LEFT OUTER JOIN?
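    A hedged note on the likely cause: Access/Jet's TOP includes ties, so when all 431 unmatched rows sort with the same NULL Region value they are returned together at the cutoff. Adding a unique column as a tiebreaker in the ORDER BY is the usual workaround, assuming ClientCode is unique, for example:
        SELECT TOP 25 tblClient.ClientCode, tblRegion.Region
        FROM (tblClient LEFT OUTER JOIN tblRegion
            ON tblClient.RegionCode = tblRegion.RegionCode)
        ORDER BY tblRegion.Region, tblClient.ClientCode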

    Read the article

  • System.ComponentModel.Component in Visual Studio 2008

    - by Mark A Johnson
    I'm maintaining a .NET 2.0 application using Visual Studio 2008. When the application was built, it was originally in Visual Studio 2003 and made use of the System.ComponentModel.Component class for data access: you can drag and drop commands, connections, etc. onto the designer surface of the component. In 2008, the data access classes don't "stick" to the component, i.e. the code for the command does not get generated in the class. When did this change? 2005? Is there a replacement for this behavior, perhaps using the DB Pro edition? Thanks.

    Read the article

  • django admin gives warning "Field 'X' doesn't have a default value"

    - by noam
    I have created two models out of an existing legacy DB, one for articles and one for tags that one can associate with articles:
        class Article(models.Model):
            article_id = models.AutoField(primary_key=True)
            text = models.CharField(max_length=400)
            class Meta:
                db_table = u'articles'

        class Tag(models.Model):
            tag_id = models.AutoField(primary_key=True)
            tag = models.CharField(max_length=20)
            article = models.ForeignKey(Article)
            class Meta:
                db_table = u'article_tags'
    I want to enable adding tags for an article from the admin interface, so my admin.py file looks like this:
        from models import Article, Tag
        from django.contrib import admin

        class TagInline(admin.StackedInline):
            model = Tag

        class ArticleAdmin(admin.ModelAdmin):
            inlines = [TagInline]

        admin.site.register(Article, ArticleAdmin)
    The interface looks fine, but when I try to save, I get:
        Warning at /admin/webserver/article/382/
        Field 'tag_id' doesn't have a default value
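    A hedged guess at the cause, since tag_id is declared as an AutoField: on a legacy MySQL table the underlying column was probably created without AUTO_INCREMENT, so the inline insert sends no value and MySQL complains about the missing default. If that is the case, a sketch of a database-level fix (back up the table first):
        -- Hypothetical: make the legacy primary key auto-generated so it matches
        -- Django's AutoField.
        ALTER TABLE article_tags
          MODIFY tag_id INT NOT NULL AUTO_INCREMENT;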

    Read the article

  • Creating a "less"-like console pager interface for pysqlite3 database

    - by Eric
    I would like to add some interactive capability to a Python CLI application I've written that stores data in a SQLite3 database. Currently, my app reads in a certain type of file, parses and analyzes it, puts the analysis data into the db, and spits the formatted records to stdout (which I generally pipe to a file). There are on the order of a million records in this file. Ideally, I would like to eliminate that text file situation altogether and just loop after the "parse and analyze" part, displaying a screen's worth of records and allowing the user to page through them and enter some commands that will edit the records. The backend part I know how to do. Can anyone suggest a good starting point for creating that pager frontend, either directly in the console (like the pager "less"), through ncurses, or some other system?
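    As one possible starting point, a minimal curses-based pager sketch, assuming the records can be fetched a page at a time with LIMIT/OFFSET (the table, column and file names here are made up):
        import curses
        import sqlite3

        PAGE_KEYS = {ord(' '): 1, curses.KEY_NPAGE: 1, curses.KEY_PPAGE: -1}

        def fetch_page(conn, offset, limit):
            # Illustrative query; the real table and columns come from the analysis step.
            cur = conn.execute("SELECT rowid, summary FROM records LIMIT ? OFFSET ?",
                               (limit, offset))
            return cur.fetchall()

        def pager(stdscr, conn):
            curses.curs_set(0)
            offset = 0
            while True:
                height, width = stdscr.getmaxyx()
                page_size = height - 1
                rows = fetch_page(conn, offset, page_size)
                stdscr.erase()
                for y, (rowid, summary) in enumerate(rows):
                    stdscr.addnstr(y, 0, "%d %s" % (rowid, summary), width - 1)
                stdscr.addnstr(height - 1, 0, "space/PgDn/PgUp to scroll, q to quit", width - 1)
                stdscr.refresh()
                key = stdscr.getch()
                if key in (ord('q'), ord('Q')):
                    break
                if key in PAGE_KEYS:
                    offset = max(0, offset + PAGE_KEYS[key] * page_size)

        if __name__ == "__main__":
            conn = sqlite3.connect("analysis.db")  # hypothetical database file
            curses.wrapper(pager, conn)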

    Read the article

  • Export a SQL database into a CSV file and use it with WEKA

    - by Simon
    How can I export a query result from a .sql database into a .csv file? I tried
        SELECT * FROM players
        INTO OUTFILE 'players.csv'
        FIELDS TERMINATED BY ','
        LINES TERMINATED BY ';';
    and my .csv file is something like:
        p1,1,2,3 p2,1,4,5
    But they are not in separate columns; they are all in one column. I tried to create a .csv file by myself just to try WEKA, something like:
        p1 1 2 3
        p2 1 4 5
    But WEKA recognizes "p1 1 2 3" as a single attribute. So: how can I correctly export a table from a SQL db to a CSV file? And how can I use it with WEKA?
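    A hedged sketch of one way to get a WEKA-friendly file, assuming MySQL: terminate lines with '\n' rather than ';' and prepend a header row, since WEKA's CSV loader takes the first comma-separated line as the attribute names (the column names below are invented).
        SELECT 'player', 'stat1', 'stat2', 'stat3'
        UNION ALL
        SELECT name, stat1, stat2, stat3 FROM players
        INTO OUTFILE '/tmp/players_weka.csv'
        FIELDS TERMINATED BY ','
        LINES TERMINATED BY '\n';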

    Read the article

  • only 1 record is being inserted

    - by bobobobo
    I'm running an insert statement using OLE DB and an ICommandWithParameters. In the ICommandText, I made sure to set:
        params.cParamSets = n;
    and then called:
        cmdTxt->Execute( NULL, IID_NULL, &params, &rowsAffected, NULL );
    where n > 1, but in my database all I see is 1 insert happening. The docs say cParamSets (greater than one) can be specified only if DBPROP_MULTIPLEPARAMSETS is VARIANT_TRUE and the command does not return any rowsets. But I set DBPROP_MULTIPLEPARAMSETS in my DBPROPs, and it's an INSERT statement, so it should not return any rowsets.

    Read the article

  • do while is breaking. How to skip rows PHP

    - by Victor
    Hello all. I have a question, probably a lame one, but it has me stuck. I have this db query:
        $query_Recordset10 = "SELECT * FROM products WHERE razdel='mix' AND ID='$ID+1' AND litraj='$litri' ORDER BY ID ASC";
        $Recordset10 = mysql_query($query_Recordset10, $victor) or die(mysql_error());
        $row_Recordset10 = mysql_fetch_array($Recordset10);
        $totalRows_Recordset10 = mysql_num_rows($Recordset10);
    I have a do-while loop in my page, and the idea is to show products matching this criteria. But if the next product is 2 or more IDs ahead, my cycle breaks. So is there a way to skip this row and get the next ID matching the criteria? Thank you very much.
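    A hedged sketch of the usual fix: instead of asking for exactly ID + 1 (and note that quoting '$ID+1' sends the literal string rather than the computed value), ask for the next matching row with a greater ID, for example:
        -- Illustrative; 5 stands in for the current $ID and 'x' for $litri.
        SELECT * FROM products
        WHERE razdel = 'mix' AND ID > 5 AND litraj = 'x'
        ORDER BY ID ASC
        LIMIT 1;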

    Read the article

  • Problem with between join for sqlalchemy orm relation.

    - by Gary van der Merwe
    I'm trying to create a relation that has a between join. Here is a shortish example of what I'm trying to do:
        #!/usr/bin/env python
        import sqlalchemy as sa
        from sqlalchemy import orm
        from sqlalchemy.engine.base import Engine
        from sqlalchemy.ext.declarative import declarative_base

        metadata = sa.MetaData()
        Base = declarative_base(metadata=metadata)
        engine = sa.create_engine('sqlite:///:memory:')

        class Network(Base):
            __tablename__ = "network"
            id = sa.Column(sa.Integer, primary_key=True)
            ip_net_addr_db = sa.Column('ip_net_addr', sa.Integer, index=True)
            ip_broadcast_addr_db = sa.Column('ip_broadcast_addr', sa.Integer, index=True)
            # This can be determined from the net address and the net mask, but we store
            # it in the db so that we can join with the address table.
            ip_net_mask_len = sa.Column(sa.SmallInteger)

        class Address(Base):
            __tablename__ = "address"
            ip_addr_db = sa.Column('ip_addr', sa.Integer, primary_key=True, index=True, unique=True)

        Network.addresses = orm.relation(Address,
            primaryjoin=Address.ip_addr_db.between(
                Network.ip_net_addr_db, Network.ip_broadcast_addr_db),
            foreign_keys=[Address.ip_addr_db])

        metadata.create_all(engine)
        Session = orm.sessionmaker(bind=engine)
        Network()
    If you run this, you get this error:
        ArgumentError: Could not determine relation direction for primaryjoin condition
        'address.ip_addr BETWEEN network.ip_net_addr AND network.ip_broadcast_addr',
        on relation Network.addresses. Do the columns in 'foreign_keys' represent only
        the 'foreign' columns in this join condition ?
    The answer to that question is yes, but I can't figure out how to tell it that.
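    A hedged sketch of one way to express this on newer SQLAlchemy releases (0.8 and later), assuming the relation only ever needs to be read: annotate the foreign side directly inside the primaryjoin with foreign() and mark the relationship viewonly, so the ORM does not have to work out how it would persist rows through a BETWEEN condition.
        from sqlalchemy.orm import foreign, relationship

        Network.addresses = relationship(
            Address,
            primaryjoin=foreign(Address.ip_addr_db).between(
                Network.ip_net_addr_db, Network.ip_broadcast_addr_db),
            viewonly=True)  # assumption: addresses are never assigned through this relation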

    Read the article

  • Creating JTable Row Header

    - by Chandu
    Hi all, I'm new to JTable. I'm working in Swing using JTable and TopLink (JPA). I have two buttons, "Add Row" and "Del Row", and I have some records displayed from the db. Whenever "Add Row" is clicked, a new record and row header should be added to the JTable, with an auto-incremented number displayed in sequential order in the JTable row header. During deletion using the "Del Row" button, the record has to be deleted but not its corresponding header, so that the following rows move up to the previous headers and the auto-incremented numbers remain unchanged and always in sequence. Please help me in this regard. Thanks in advance, Chandu

    Read the article

  • mysql and .net: when using tableadapters I'm getting MySqlException "insert command denied for user"

    - by Deveti Putnik
    Hi! I am using MySQL as the db for my ASP.NET application. Here are the facts: I am using the connection string from web.config, which has both username and password. I can do a SELECT with a TableAdapter. When I try to do an INSERT with a TableAdapter, I get the "MySqlException: insert command denied for user" error. When I do the INSERT programmatically, i.e. using a connection, command object, etc., everything is fine; in that case I'm reading the connection string from web.config too. This only happens on GoDaddy hosting; on my local machine I don't have this kind of problem. Can anyone suggest what I can do to make it work on GoDaddy hosting? Regards, D

    Read the article

  • Iterator blocks in Clojure?

    - by Checkers
    I am using clojure.contrib.sql to fetch some records from an SQLite database.
        (defn read-all-foo []
          (with-connection *db*
            (with-query-results res ["select * from foo"]
              (into [] res))))
    Now, I don't really want to realize the whole sequence before returning from the function (i.e. I want to keep it lazy), but if I return res directly or wrap it in some kind of lazy wrapper (for example I want to make a certain map transformation on the result sequence), the SQL-related bindings will be reset and the connection will be closed after I return, so realizing the sequence will throw an exception. How can I enclose the whole function in a closure and return a kind of iterator block (like yield in C# or Python)? Or is there another way to return a lazy sequence from this function?

    Read the article

  • First ASM program

    - by Tal
    Hello, I'm trying to run my first ASM 8086 program with MASM on Windows Vista 64-bit. I put this program into the MASM editor:
        .model small
        .stack
        .data
        message db "Hello world, I'm learning Assembly !!!", "$"
        .code
        main proc
            mov ax,seg message
            mov ds,ax
            mov ah,09
            lea dx,message
            int 21h
            mov ax,4c00h
            int 21h
        main endp
        end main
    and the MASM editor gives me this output, and I have no idea what's wrong with the program:
        Assembling: D:\masm32\First.asm
        D:\masm32\First.asm(9) : error A2004: symbol type conflict
        D:\masm32\First.asm(19) : warning A4023: with /coff switch, leading underscore required for start address : main
        Assembly Error
    Where is the problem with this code? This is my first ASM program, please remember. Thank you :)

    Read the article

  • Why would you want a case sensitive database?

    - by Khorkrak
    What are some reasons for choosing a case-sensitive collation over a case-insensitive one? I can see perhaps a modest performance gain for the DB engine in doing string comparisons. Is that it? If your data is all lower case or all upper case, then case-sensitive could be reasonable, but it's a disaster if you store mixed-case data and then try to query it. You then have to apply a lower() function to the column so that it matches the corresponding lower-case string literal, which prevents index usage in every DBMS. So I'm wondering why anyone would use such an option.

    Read the article

  • What is the best area for freelance / small business developer work - and how to find it

    - by Olav
    Most freelance developers are essentially contractors: working full time and often hired through another company. What are the best options for real freelancing or a small business built around development skills? (I am only looking for local clients; oDesk etc. is not interesting.) Most options open to small companies are web sites, but most clients don't want to pay much, and there are lots of people willing to do it for very little money. I think if you want to make money that way you must be a "web site" business. I think the best option should be to work for companies that spend a lot of resources on web and db work. How could I get in touch with them? Any other ideas? Part of my concept is to use some off-shoring to complement my skill set, so I should be able to do PHP or Drupal even if I am not at expert level.

    Read the article

  • Google Maps - user to pinpoint a location

    - by JohnB
    Hi, is it possible to allow users of my website to mark places on a map I display using the Google Maps API? I then need to save that location's coordinates to a db. I've been looking through the Google Maps API, and I found that I can use the web service to do searches like this:
        http://maps.google.com/maps/geo?q=Maine,+United+States&output=json&oe=utf8&sensor=false&key=my_key
    But I am not sure it works down to the house-number level (which I need it to), and I'm not sure how to display a "did you mean?" prompt to the user when he misspells the address. Anyone have an idea? Thanks,

    Read the article

  • mysql query performance help

    - by Stefano
    Hi, I have a quite large table storing the words contained in email messages:
        mysql> explain t_message_words;
        +----------------+---------+------+-----+---------+----------------+
        | Field          | Type    | Null | Key | Default | Extra          |
        +----------------+---------+------+-----+---------+----------------+
        | mwr_key        | int(11) | NO   | PRI | NULL    | auto_increment |
        | mwr_message_id | int(11) | NO   | MUL | NULL    |                |
        | mwr_word_id    | int(11) | NO   | MUL | NULL    |                |
        | mwr_count      | int(11) | NO   |     | 0       |                |
        +----------------+---------+------+-----+---------+----------------+
    The table contains about 100M rows; mwr_message_id is a FK to the messages table, mwr_word_id is a FK to the words table, and mwr_count is the number of occurrences of word mwr_word_id in message mwr_message_id. To calculate the most used words, I use the following query:
        SELECT SUM(mwr_count) AS word_count, mwr_word_id
        FROM t_message_words
        GROUP BY mwr_word_id
        ORDER BY word_count DESC
        LIMIT 100;
    It runs almost forever (more than half an hour on the test server):
        mysql> show processlist;
        | Id | User | Host           | db     | Command | Time | State                | Info
        | 41 | root | localhost:3148 | tst_db | Query   | 1955 | Copying to tmp table | SELECT SUM(mwr_count) AS word_count, mwr_word_id FROM t_message_words GROUP BY mwr_word_id ...
        3 rows in set (0.00 sec)
    Is there anything I can do to "speed up" the query, apart from adding more RAM, more CPU, faster disks? Thank you in advance, Stefano
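    A hedged suggestion, assuming the totals cannot simply be precomputed into a summary table: a composite index covering both columns lets MySQL read the aggregation from the index in mwr_word_id order instead of copying all 100M rows to a temporary table, which is what the "Copying to tmp table" state points at.
        -- Covering index sketch; building it on a 100M-row table takes time and disk space.
        ALTER TABLE t_message_words
          ADD INDEX idx_word_count (mwr_word_id, mwr_count);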

    Read the article

  • SQL Server 2005 script with join across Database Servers

    - by Robin Day
    I have the following script, which I use to give me a simple "diff" between tables in two different databases. (Note: in reality my comparison is on a lot more than just an ID.)
        SELECT MyTableA.MyId, MyTableB.MyId
        FROM MyDataBaseA..MyTable MyTableA
        FULL OUTER JOIN MyDataBaseB..MyTable MyTableB
            ON MyTableA.MyId = MyTableB.MyId
        WHERE MyTableA.MyId IS NULL
            OR MyTableB.MyId IS NULL
    I now need to run this script against two databases that live on different servers. At the moment my solution is to back up the database from one server, restore it to the other and then run the script. I'm pretty sure this is possible, but is it likely to be a can of worms? This is a very rare task for me, and if it involves a large number of DB setting changes then I will probably stick to my backup method.
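    A hedged sketch using a linked server, which lets the same diff run across machines with four-part names (REMOTESRV is a placeholder for the other server's name, and sp_addlinkedsrvlogin may also be needed to map credentials):
        -- One-time setup on the server where the query will run.
        EXEC sp_addlinkedserver @server = N'REMOTESRV', @srvproduct = N'SQL Server';

        SELECT MyTableA.MyId, MyTableB.MyId
        FROM MyDataBaseA..MyTable MyTableA
        FULL OUTER JOIN REMOTESRV.MyDataBaseB.dbo.MyTable MyTableB
            ON MyTableA.MyId = MyTableB.MyId
        WHERE MyTableA.MyId IS NULL
           OR MyTableB.MyId IS NULL;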

    Read the article
