Search Results

Search found 61830 results on 2474 pages for 'efficient time use'.


  • Date Time conversion

    - by Zerotoinfinite
    Hi all, I am getting a datetime such as 1/2/2010 11:29:30, which I am displaying in a GridView, and I want to convert it to "Feb 1, 2010 at 11:29". Please let me know how to convert it like this. Thanks in advance. Note: I am using ASP.NET with C#.
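
    A minimal sketch of one way to do this with a custom format string; the "d/M/yyyy" order of the incoming value is an assumption (if the value is already a DateTime, the ParseExact step can be skipped):

        using System;
        using System.Globalization;

        class Demo
        {
            static void Main()
            {
                // Parse the raw string; "d/M/yyyy" is an assumption about its order.
                DateTime dt = DateTime.ParseExact("1/2/2010 11:29:30",
                    "d/M/yyyy HH:mm:ss", CultureInfo.InvariantCulture);
                // "MMM d, yyyy 'at' HH:mm" -> "Feb 1, 2010 at 11:29"
                Console.WriteLine(dt.ToString("MMM d, yyyy 'at' HH:mm",
                    CultureInfo.InvariantCulture));
            }
        }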

    Read the article

  • Determine/improve Excel opening time

    - by Dan
    I have an add-in (actually two: one for Office 2003 and one for 2007). I have been receiving complaints that the Office product opens more slowly when my add-in is installed. Is there a way to improve the opening time of the Office product? Is there a way to compare the opening time of the Office product with and without the add-in?
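
    One rough way to compare, as a sketch: time how long Excel takes to become responsive, once normally and once started in safe mode (Excel's /safe switch skips add-ins); the bare "excel.exe" assumes it is on the PATH:

        using System;
        using System.Diagnostics;

        class ExcelStartupTimer
        {
            static void Main()
            {
                var sw = Stopwatch.StartNew();
                // Run once as-is, then again with the "/safe" argument to
                // skip add-ins, and compare the two timings.
                var excel = Process.Start("excel.exe");
                excel.WaitForInputIdle(); // returns when the UI accepts input
                sw.Stop();
                Console.WriteLine("Excel ready after " + sw.ElapsedMilliseconds + " ms");
            }
        }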

    Read the article

  • DELETE taking a really long time! (MySQL)

    - by every_answer_gets_a_point
    I am doing this:

        delete calibration_2009
        from calibration_2009
        join batchinfo_2009
          on calibration_2009.rowid = batchinfo_2009.rowid
        where batchinfo_2009.reporttime like '%2010%';

    Both tables have about 500k rows of data, and I suspect that about 250k match the criteria to be deleted. So far it has been running for two hours! Is there something wrong?
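
    Not from the question, but a common culprit: LIKE '%2010%' cannot use an index, and neither can the join without one on rowid. A sketch of what might help, assuming reporttime is a DATETIME (if it is stored as text, the range predicate needs adjusting):

        -- Index the join key on both sides (skip any that already exist).
        CREATE INDEX idx_batchinfo_rowid   ON batchinfo_2009 (rowid);
        CREATE INDEX idx_calibration_rowid ON calibration_2009 (rowid);

        -- A sargable range instead of LIKE '%2010%', so an index on
        -- reporttime can also be used.
        DELETE calibration_2009
        FROM calibration_2009
        JOIN batchinfo_2009
          ON calibration_2009.rowid = batchinfo_2009.rowid
        WHERE batchinfo_2009.reporttime >= '2010-01-01'
          AND batchinfo_2009.reporttime <  '2011-01-01';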

    Read the article

  • Need to sync two lists with attribute time, but times aren't equal

    - by virgula24
    I am going to try to describe my problem the best I can. I have two lists, one with audio frames and the other with color frames (not relevant). Both of them have timestamps; they were captured at the same moment but at different instants. So I have something like this:

        index  COLOR  AUDIO
        0      841    846
        1      873    897
        2      905    948
        3      940    1000

    The frames start at high numbers because they were captured and then trimmed to specific parts, but in short, frame 0 is synced with only 5 ms apart (timestamps in ms). In every case I have, the audio frame count is less than the color frame count. I need to make them have the same count. The starting frames may be color<audio, color
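
    The question names no language; as a sketch in C# (all names hypothetical), one way to pair frames is a nearest-timestamp match, assuming both lists are sorted ascending:

        using System;
        using System.Collections.Generic;

        class FrameSync
        {
            // Index of the element in `sorted` closest to `value`.
            static int NearestIndex(List<long> sorted, long value)
            {
                int i = sorted.BinarySearch(value);
                if (i >= 0) return i;
                i = ~i;                       // first element greater than value
                if (i == 0) return 0;
                if (i == sorted.Count) return sorted.Count - 1;
                return value - sorted[i - 1] <= sorted[i] - value ? i - 1 : i;
            }

            static void Main()
            {
                var color = new List<long> { 841, 873, 905, 940 };
                var audio = new List<long> { 846, 897, 948, 1000 };
                for (int a = 0; a < audio.Count; a++)
                    Console.WriteLine($"audio[{a}] ({audio[a]} ms) -> color[{NearestIndex(color, audio[a])}]");
            }
        }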

    Read the article

  • RegEx - Time Validation ((h)h:mm)

    - by Josh
    /^\d{1,2}[:][0-5][0-9]$/ is what I have. This limits minutes to 00-59. It does not, however, limit hours to between 0 and 12. For similarity and uniformity I would like to do this with a regex alone if possible. Furthermore, I would like the first digit to be optional, i.e. 09:30 accepted as well as 9:30. I played around with ranges, but something out of range is always accepted.
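
    A sketch of one pattern that does both, shown here via .NET's Regex (hedged: this treats valid hours as 1-12, i.e. 1, 01-09, 10-12; if a bare 0 hour should also pass, the alternation needs adjusting):

        using System;
        using System.Text.RegularExpressions;

        class TimeValidation
        {
            static void Main()
            {
                // (0?[1-9]|1[0-2]) : hours 1-12, leading zero optional
                // [0-5][0-9]       : minutes 00-59
                var pattern = new Regex(@"^(0?[1-9]|1[0-2]):[0-5][0-9]$");
                foreach (var s in new[] { "9:30", "09:30", "13:05", "0:30" })
                    Console.WriteLine($"{s} -> {pattern.IsMatch(s)}");
            }
        }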

    Read the article

  • setContentView taking long time (10-15 seconds) to execute

    - by Paul
    I have a large activity that contains 100 or more buttons, and it works fine once loaded. The problem, however, is loading: from clicking its launch icon to getting the first view takes 10-12 seconds. Until the first view appears, it shows a gray title bar on a black background. At the least, I want to show a simple progress bar or dialog while it's loading, but it seems you cannot show anything before setContentView has executed. I think I have tried everything I could without any success. If you can give me any hint or idea, I would be thankful. UPDATE: I found a dramatic resolution; it now takes a second to load the view. I didn't use a splash screen, thread, or AsyncTask at all (by the way, don't try to touch the UI from a background thread, because the Android UI toolkit is not thread-safe). The problem was that those buttons were based on a custom class whose initialization loaded the same resource every time, so 100 or more file operations were happening inside setContentView. Making that a single load solved my problem.
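
    A sketch of the fix described in the update (class and resource names are hypothetical): load the shared resource once into a static field instead of once per button instance:

        public class IconButton extends android.widget.Button {
            private static android.graphics.Bitmap sharedIcon; // loaded once, shared

            public IconButton(android.content.Context context,
                              android.util.AttributeSet attrs) {
                super(context, attrs);
                if (sharedIcon == null) {
                    // One decode for all 100+ buttons instead of one per instance.
                    sharedIcon = android.graphics.BitmapFactory.decodeResource(
                            getResources(), R.drawable.button_icon); // hypothetical id
                }
            }
        }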

    Read the article

  • SerialPort.Open() takes very long time to execute

    - by narancha
    I'm having trouble when I'm trying to use the SerialPort.Open() function. Sometimes it opens in 5 seconds, and sometimes it takes several minutes. This is my code:

        public void InvokeSerialPortdetectedEvent(string s)
        {
            // The invoked function is PortHandeler_SerialPortDetectEvent().
            SerialPortDetectEvent.Invoke(this, s);
        }

        void PortHandeler_SerialPortDetectEvent(object sender, string name)
        {
            OpenSerialPort(name);
            AddDongleToDeviceList();
        }

        private void OpenSerialPort(string Name)
        {
            if (serialPort1.IsOpen)
            {
                return;
            }
            serialPort1.PortName = Name;
            try
            {
                serialPort1.Open();
                if (serialPort1.IsOpen)
                {
                    Console.Write("Open Serialport: " + Name);
                }
            }
            catch (Exception e)
            {
                Console.Write(e.Message);
                Console.Write(e.StackTrace);
            }
        }
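
    The code itself looks fine; the delay usually happens inside the driver. One hedged workaround (not from the question; the port name below is an assumption) is to attempt Open() on a worker thread with a timeout, so a slow driver doesn't block the caller indefinitely:

        using System;
        using System.IO.Ports;
        using System.Threading.Tasks;

        class PortOpener
        {
            static bool TryOpen(SerialPort port, TimeSpan timeout)
            {
                // Note: if the timeout elapses, the Open() call keeps
                // running in the background until it completes or fails.
                var task = Task.Run(() => port.Open());
                try
                {
                    return task.Wait(timeout) && port.IsOpen;
                }
                catch (AggregateException)
                {
                    return false; // Open() itself threw
                }
            }

            static void Main()
            {
                var port = new SerialPort("COM3");
                Console.WriteLine(TryOpen(port, TimeSpan.FromSeconds(10))
                    ? "opened" : "not open after 10 s");
            }
        }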

    Read the article

  • % operator for time calculation

    - by Chris
    I am trying to display minutes and seconds based on a number of seconds. I have:

        float seconds = 200;
        float mins = seconds / 60.0;
        float sec = mins % 60.0;
        [timeIndexLabel setText:[NSString stringWithFormat:@"%.2f , %.2f", mins, seconds]];

    But I get an error: invalid operands of types 'float' and 'double' to binary 'operator%'. And I don't understand why... Can someone throw me a bone!?
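
    The % operator is integer-only in C and Objective-C, which is what the compiler is complaining about. A sketch of one fix, using fmodf() for the leftover seconds and integer division for the whole minutes:

        #import <Foundation/Foundation.h>
        #include <math.h>

        int main(void) {
            float seconds = 200;
            int mins = (int)(seconds / 60);     // 3 whole minutes
            float secs = fmodf(seconds, 60.0f); // 20 seconds left over
            NSString *text = [NSString stringWithFormat:@"%d:%02.0f", mins, secs];
            NSLog(@"%@", text);                 // prints "3:20"
            return 0;
        }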

    Read the article

  • Reduce query time (optimise query)

    - by user2527657
    select a.userid,
           (select firstName from user where userid = NOTUSED.userid) as z,
           (select max(login_time)
              from userLoginTime AS b
             where userid = a.user_id
             GROUP BY b.user_id
             ORDER BY b.user_id) as y
    From (SELECT DISTINCT a.user_id
            FROM user AS a
            LEFT OUTER JOIN (SELECT (userid)
                               FROM userlogintime
                              where serialid = 15400012) AS b
              ON user.user_id = b.user_id
           where a.Serialid = 15400012
             AND b.userid IS NULL) NOTUSED,
         Relation r,
         user a
    where r.childuserid = NOTUSED.userid
      and guarduserid = a.userid

    Read the article

  • juju bootstrap fails with a local environment, why?

    - by Braiam
    Each time I try to bootstrap juju using a local environment, it fails starting the juju-db-braiam-local script as follows:

        $ sudo juju --debug --verbose bootstrap
        2013-10-20 02:28:53 INFO juju.provider.local environprovider.go:32 opening environment "local"
        2013-10-20 02:28:53 DEBUG juju.provider.local environ.go:210 found "10.0.3.1" as address for "lxcbr0"
        2013-10-20 02:28:53 DEBUG juju.provider.local environ.go:234 checking 10.0.3.1:8040 to see if machine agent running storage listener
        2013-10-20 02:28:53 DEBUG juju.provider.local environ.go:237 nope, start some
        2013-10-20 02:28:53 DEBUG juju.environs.tools storage.go:87 Uploading tools for [raring precise]
        2013-10-20 02:28:53 DEBUG juju.environs.tools build.go:109 looking for: juju
        2013-10-20 02:28:53 DEBUG juju.environs.tools build.go:150 checking: /usr/bin/jujud
        2013-10-20 02:28:53 INFO juju.environs.tools build.go:156 found existing jujud
        2013-10-20 02:28:53 INFO juju.environs.tools build.go:166 target: /tmp/juju-tools243949228/jujud
        2013-10-20 02:28:53 DEBUG juju.environs.tools build.go:217 forcing version to 1.14.1.1
        2013-10-20 02:28:53 DEBUG juju.environs.tools build.go:37 adding entry: &tar.Header{Name:"FORCE-VERSION", Mode:420, Uid:0, Gid:0, Size:8, ModTime:time.Time{sec:63517832933, nsec:278894120, loc:(*time.Location)(0x108fda0)}, Typeflag:0x30, Linkname:"", Uname:"ubuntu", Gname:"ubuntu", Devmajor:0, Devminor:0, AccessTime:time.Time{sec:63517832933, nsec:278894120, loc:(*time.Location)(0x108fda0)}, ChangeTime:time.Time{sec:63517832933, nsec:278894120, loc:(*time.Location)(0x108fda0)}}
        2013-10-20 02:28:53 DEBUG juju.environs.tools build.go:37 adding entry: &tar.Header{Name:"jujud", Mode:493, Uid:0, Gid:0, Size:19179512, ModTime:time.Time{sec:63517832933, nsec:274894120, loc:(*time.Location)(0x108fda0)}, Typeflag:0x30, Linkname:"", Uname:"ubuntu", Gname:"ubuntu", Devmajor:0, Devminor:0, AccessTime:time.Time{sec:63517832933, nsec:274894120, loc:(*time.Location)(0x108fda0)}, ChangeTime:time.Time{sec:63517832933, nsec:274894120, loc:(*time.Location)(0x108fda0)}}
        2013-10-20 02:28:55 INFO juju.environs.tools storage.go:106 built 1.14.1.1-raring-amd64 (4196kB)
        2013-10-20 02:28:55 INFO juju.environs.tools storage.go:112 uploading 1.14.1.1-precise-amd64
        2013-10-20 02:28:55 INFO juju.environs.tools storage.go:112 uploading 1.14.1.1-raring-amd64
        2013-10-20 02:28:55 INFO juju.environs.tools tools.go:29 reading tools with major version 1
        2013-10-20 02:28:55 INFO juju.environs.tools tools.go:34 filtering tools by version: 1.14.1.1
        2013-10-20 02:28:55 INFO juju.environs.tools tools.go:37 filtering tools by series: precise
        2013-10-20 02:28:55 DEBUG juju.environs.tools storage.go:41 reading v1.* tools
        2013-10-20 02:28:55 DEBUG juju.environs.tools storage.go:61 found 1.14.1.1-precise-amd64
        2013-10-20 02:28:55 DEBUG juju.environs.tools storage.go:61 found 1.14.1.1-raring-amd64
        2013-10-20 02:28:55 INFO juju.environs.boostrap bootstrap.go:57 bootstrapping environment "local"
        2013-10-20 02:28:55 INFO juju.environs.tools tools.go:29 reading tools with major version 1
        2013-10-20 02:28:55 INFO juju.environs.tools tools.go:34 filtering tools by version: 1.14.1.1
        2013-10-20 02:28:55 INFO juju.environs.tools tools.go:37 filtering tools by series: precise
        2013-10-20 02:28:55 DEBUG juju.environs.tools storage.go:41 reading v1.* tools
        2013-10-20 02:28:55 DEBUG juju.environs.tools storage.go:61 found 1.14.1.1-precise-amd64
        2013-10-20 02:28:55 DEBUG juju.environs.tools storage.go:61 found 1.14.1.1-raring-amd64
        2013-10-20 02:28:55 DEBUG juju.provider.local environ.go:395 create mongo journal dir: /home/braiam/.juju/local/db/journal
        2013-10-20 02:28:55 DEBUG juju.provider.local environ.go:401 generate server cert
        2013-10-20 02:28:55 INFO juju.provider.local environ.go:421 installing service juju-db-braiam-local to /etc/init
        2013-10-20 02:28:56 ERROR juju.provider.local environ.go:423 could not install mongo service: exec ["start" "juju-db-braiam-local"]: exit status 1 (start: Job failed to start)
        2013-10-20 02:28:56 ERROR juju supercommand.go:282 command failed: exec ["start" "juju-db-braiam-local"]: exit status 1 (start: Job failed to start)
        error: exec ["start" "juju-db-braiam-local"]: exit status 1 (start: Job failed to start)

    What is the reason for this error, and how can I solve it?
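
    Not part of the question, but as a diagnostic sketch: the failing step is Upstart starting the juju-db job, and Upstart keeps a per-job log, so starting the job by hand and reading its log (default log location assumed here) usually shows why mongo would not start:

        sudo start juju-db-braiam-local
        sudo cat /var/log/upstart/juju-db-braiam-local.log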

    Read the article

  • MySQL table data transformation -- how can I dis-aggregate MySQL time data?

    - by lighthouse65
    We are coding for a MySQL data warehousing application that stores descriptive data (the User ID, Work ID, Machine ID, and Event Start/End Time columns in the first table below) associated with time and production quantity data (the Output and Time columns) upon which aggregate (SUM, COUNT, AVG) functions are applied. We now wish to dis-aggregate the time data for another type of analysis. Our current data table design:

        +---------+---------+------------+---------------------+---------------------+--------+------+
        | User ID | Work ID | Machine ID | Event Start Time    | Event End Time      | Output | Time |
        +---------+---------+------------+---------------------+---------------------+--------+------+
        | 080025  | ABC123  | M01        | 2008-01-24 16:19:15 | 2008-01-24 16:34:45 | 2120   | 930  |
        +---------+---------+------------+---------------------+---------------------+--------+------+

    The dis-aggregation reprocessing we would like to do transforms the table content to a granularity of minutes, rather than the current production-event ("Event Start Time" to "Event End Time") granularity. The reprocessing of the existing table row would look like:

        +---------+---------+------------+-------------------+--------+
        | User ID | Work ID | Machine ID | Production Minute | Output |
        +---------+---------+------------+-------------------+--------+
        | 080025  | ABC123  | M01        | 2008-01-24 16:19  | 133    |
        | 080025  | ABC123  | M01        | 2008-01-24 16:20  | 133    |
        | 080025  | ABC123  | M01        | 2008-01-24 16:21  | 133    |
        | 080025  | ABC123  | M01        | 2008-01-24 16:22  | 133    |
        | 080025  | ABC123  | M01        | 2008-01-24 16:23  | 133    |
        | 080025  | ABC123  | M01        | 2008-01-24 16:24  | 133    |
        | 080025  | ABC123  | M01        | 2008-01-24 16:25  | 133    |
        | 080025  | ABC123  | M01        | 2008-01-24 16:26  | 133    |
        | 080025  | ABC123  | M01        | 2008-01-24 16:27  | 133    |
        | 080025  | ABC123  | M01        | 2008-01-24 16:28  | 133    |
        | 080025  | ABC123  | M01        | 2008-01-24 16:29  | 133    |
        | 080025  | ABC123  | M01        | 2008-01-24 16:30  | 133    |
        | 080025  | ABC123  | M01        | 2008-01-24 16:31  | 133    |
        | 080025  | ABC123  | M01        | 2008-01-24 16:32  | 133    |
        | 080025  | ABC123  | M01        | 2008-01-24 16:33  | 133    |
        | 080025  | ABC123  | M01        | 2008-01-24 16:34  | 133    |
        +---------+---------+------------+-------------------+--------+

    So the reprocessing would take an existing row of data created at production-event granularity and change the granularity to minutes, eliminating the now-redundant Event End Time and Time columns while doing so. It assumes a constant rate of production and divides Output by the difference in minutes plus one to populate the new table's Output column. I know this can be done in code... but can it be done entirely in a MySQL insert statement (or otherwise entirely in MySQL)? I am thinking of an INSERT ... INTO construction but keep getting stuck. An additional complexity is that there are hundreds of machines to include in the operation, so there will be multiple rows (one for each machine) for each minute of the day. Any ideas would be much appreciated. Thanks.
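
    One hedged sketch of doing it entirely in MySQL: with a helper table of integers (called ints here; it is not in the original schema, and the other table and column names below are assumptions as well), each event row can be joined to one row per elapsed minute:

        INSERT INTO production_minutes
            (user_id, work_id, machine_id, production_minute, output)
        SELECT s.user_id,
               s.work_id,
               s.machine_id,
               DATE_FORMAT(s.event_start_time + INTERVAL i.n MINUTE, '%Y-%m-%d %H:%i'),
               ROUND(s.output /
                     (TIMESTAMPDIFF(MINUTE, s.event_start_time, s.event_end_time) + 1))
        FROM source_events AS s
        JOIN ints AS i
          ON i.n <= TIMESTAMPDIFF(MINUTE, s.event_start_time, s.event_end_time);

    For the sample row, TIMESTAMPDIFF yields 15, so 16 minute-rows are produced and 2120 / 16 rounds to 133, matching the expected output; the join naturally handles many machines at once.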

    Read the article

  • C++: how to truncate a double in an efficient way?

    - by Arman
    Hello, I would like to truncate a double to 4 digits. Is there an efficient way to do that? My current solution is:

        double roundDBL(double d, unsigned int p = 4)
        {
            unsigned int fac = pow(10, p);
            double facinv = 1.0 / static_cast<double>(fac);
            double x = static_cast<unsigned int>(d * fac) * facinv;
            return x;
        }

    but using pow and the conversions seems not so efficient to me. Kind regards, Arman.
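
    If C++11 is available, a hedged alternative is std::trunc with a precomputed scale factor, which avoids pow() entirely and also works for negative values (the cast to unsigned int above does not):

        #include <cmath>
        #include <iostream>

        double truncate4(double d)
        {
            return std::trunc(d * 10000.0) / 10000.0; // 4 digits, scale precomputed
        }

        int main()
        {
            std::cout << truncate4(3.14159265) << '\n'; // prints 3.1415
            return 0;
        }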

    Read the article

  • Most efficient way to update a MySQL Database on a Linux host with that of an ASP.Net Form on Windows

    - by NJTechGuy
    My kind web host (1and1) royally asked me to go elsewhere to do something like this. I have two sites. One of them was developed by a .NET programmer, and now I am contracted to implement a PHP site and fetch data from the .NET site. There is an ASP.NET form that a customer fills in, and when they hit submit, the data gets stored in a SQL Server DB. How do I also store the same data in MySQL in parallel? I cannot directly use a database connector from ASP.NET, since MySQL connectivity is not supported on 1and1 Windows hosting (a business account, no less!). What I thought of is to publish an RSS feed of entries on the ASP.NET site and routinely scrape that data into MySQL on the Linux host. It is overkill, I know, and not efficient. I thought I would pick the best brains on SOF to get a different, more efficient opinion. Thanks in advance, guys...
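
    A sketch of the RSS-scraping idea on the PHP side (the feed URL, credentials, and table layout are all hypothetical, and INSERT IGNORE assumes a unique key on guid so repeated runs don't duplicate rows):

        <?php
        // Pull entries from the ASP.NET site's feed and mirror them into MySQL.
        $feed = simplexml_load_file('https://example.com/entries.rss');
        $db   = new PDO('mysql:host=localhost;dbname=mirror', 'user', 'pass');
        $stmt = $db->prepare(
            'INSERT IGNORE INTO entries (guid, title, published) VALUES (?, ?, ?)');

        foreach ($feed->channel->item as $item) {
            $stmt->execute(array(
                (string) $item->guid,
                (string) $item->title,
                date('Y-m-d H:i:s', strtotime((string) $item->pubDate)),
            ));
        }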

    Read the article

  • How efficient is an if statement compared to a test that doesn't use an if? (C++)

    - by Keand64
    I need a program to get the smaller of two numbers, and I'm wondering if using a standard "if x is less than y":

        int a, b, low;
        if (a < b) low = a;
        else low = b;

    is more or less efficient than this:

        int a, b, low;
        low = b + ((a - b) & ((a - b) >> 31));

    (or the variation of putting int delta = a - b at the top and replacing instances of a - b with that). I'm just wondering which one of these would be more efficient (or if the difference is too minuscule to be relevant), and the efficiency of if-else statements versus alternatives in general.
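
    The only reliable answer is to measure. A rough micro-benchmark sketch (results vary widely by compiler and flags, and modern compilers often emit a branchless cmov for the plain if/else anyway; the bit trick also assumes an arithmetic right shift and no overflow in a - b):

        #include <chrono>
        #include <cstdio>

        int main()
        {
            volatile int sink = 0; // prevents the loops from being optimized away
            auto t0 = std::chrono::steady_clock::now();
            for (int i = 0; i < 100000000; ++i) {
                int a = i, b = i ^ 0x5bd1e995;
                sink = a < b ? a : b;                   // branching form
            }
            auto t1 = std::chrono::steady_clock::now();
            for (int i = 0; i < 100000000; ++i) {
                int a = i, b = i ^ 0x5bd1e995;
                sink = b + ((a - b) & ((a - b) >> 31)); // bit-trick form
            }
            auto t2 = std::chrono::steady_clock::now();
            std::printf("if: %lld ms, bit-trick: %lld ms\n",
                (long long)std::chrono::duration_cast<std::chrono::milliseconds>(t1 - t0).count(),
                (long long)std::chrono::duration_cast<std::chrono::milliseconds>(t2 - t1).count());
            return 0;
        }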

    Read the article

  • I need an efficient protocol between webservices that are more or less supported by all major languages

    - by corgrath
    Hey all. I am looking for a fast and efficient protocol that can be used between different web services to send text data (not binary data). It doesn't matter whether the protocol itself is binary or text based. Some conditions: it has to be more "efficient" than normal XML, which adds a lot of extra data and whose read/write tooling is too heavy; and it has to be "supported" by most major languages, meaning it cannot be available for only one specific language. At the moment, both Java and PHP have to be able to talk to each other using this protocol. I have already looked at: XML, which I am currently using; Hessian 2, which works perfectly in Java, but the PHP support is out of date; and JSON, though the difference between JSON and XML is only minor. Any suggestions are welcome!

    Read the article

  • Why is writing a compiler in a functional language more efficient and easier?

    - by wvd
    Hello all, I've been thinking about this question for a very long time, but I really couldn't find the answer on Google, nor a similar question on Stack Overflow. If there is a duplicate, I'm sorry for that. A lot of people seem to say that writing compilers and other language tools in functional languages such as OCaml and Haskell is much more efficient and easier than writing them in imperative languages. Is this true? And if so, why is it so efficient and easy to write them in functional languages instead of in an imperative language like C? Also, isn't a language tool written in a functional language slower than in some low-level language like C? Thanks in advance, William v. Doorn
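
    A sketch of why this reputation exists: algebraic data types and pattern matching map directly onto ASTs and tree transforms, the bread and butter of compiler work. A toy constant-folding pass in OCaml:

        (* An AST and a constant-folding pass in a few lines. *)
        type expr =
          | Num of int
          | Add of expr * expr
          | Mul of expr * expr

        let rec fold = function
          | Num n -> Num n
          | Add (a, b) ->
              (match (fold a, fold b) with
               | (Num x, Num y) -> Num (x + y)
               | (a', b') -> Add (a', b'))
          | Mul (a, b) ->
              (match (fold a, fold b) with
               | (Num x, Num y) -> Num (x * y)
               | (a', b') -> Mul (a', b'))

        let () =
          match fold (Add (Num 1, Mul (Num 2, Num 3))) with
          | Num n -> print_int n    (* prints 7 *)
          | _ -> print_endline "not a constant"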

    Read the article

  • Efficient way to combine results of two database queries.

    - by ensnare
    I have two tables on different servers, and I'd like some help finding an efficient way to combine and match the datasets. Here's an example. From server 1, which holds our stories, I perform a query like:

        query = """SELECT author_id, title, text
                   FROM stories
                   ORDER BY timestamp_created DESC
                   LIMIT 10"""
        results = DB.getAll(query)
        for i in range(len(results)):
            # Build a string of author_ids, e.g. '1314,4134,2624,2342'

    But I'd like to fetch some info about each author_id from server 2:

        query = """SELECT id, avatar_url
                   FROM members
                   WHERE id IN (%s)"""
        values = (uid_list)
        results = DB.getAll(query, values)

    Now I need some way to combine these two queries so I have a dict that has the story as well as the avatar_url and member_id. If this data were on one server, it would be a simple join that would look like:

        SELECT * FROM members, stories WHERE members.id = stories.author_id

    But since we store the data on multiple servers, this is not possible. What is the most efficient way to do this? Thanks.
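
    A sketch of the application-side join (the row shapes below are assumptions): index server 2's member rows by id, then decorate each story from server 1:

        story_rows = [{"author_id": 1314, "title": "A story", "text": "..."}]
        member_rows = [{"id": 1314, "avatar_url": "http://example.com/a.png"}]

        # One dict lookup per story instead of a nested scan over members.
        members_by_id = {m["id"]: m for m in member_rows}
        combined = [
            dict(story,
                 member_id=story["author_id"],
                 avatar_url=members_by_id.get(story["author_id"], {}).get("avatar_url"))
            for story in story_rows
        ]
        print(combined)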

    Read the article
