  • How do I aggregate activerecord model data for a specific time period?

    - by gsiener
    I'm collecting data from a system every ~10s (the interval varies due to communication time with networked devices). I'd like to calculate averages and sums of the stored values for this ActiveRecord model on a daily basis. All records are stored in UTC. What's the correct way to sum and average values for, e.g., the previous day from midnight to midnight EST? I can do this in SQL, but I don't know the "rails way" to make this calculation.
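
    For reference, a sketch of the SQL such a query ultimately needs to produce. The "readings" table and its value/created_at columns are hypothetical stand-ins for the asker's model, and EST is treated as a fixed UTC-5 offset (daylight saving ignored for brevity):

        -- Sum and average for the previous day, midnight to midnight EST,
        -- over UTC timestamps (midnight EST = 05:00 UTC at a fixed -5 offset).
        SELECT SUM(value) AS total,
               AVG(value) AS average
        FROM   readings
        WHERE  created_at >= '2010-05-20 05:00:00'   -- midnight EST, in UTC
          AND  created_at <  '2010-05-21 05:00:00';  -- next midnight EST, in UTC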

  • liferay portal with asp.net

    - by harrisonmeister
    Hi, this is a Java noob question. I've been looking at the Liferay portal technology; however, I'm a .NET (C# or VB) developer by day. Does anyone know if you can use a Liferay install to host .NET code, not just by using the iFrame method of redirecting to a different site? That just sucks, in my opinion. Alternatively, is the saner option to go down the route of porting my skills back to Java? Thanks, Mark

  • How to quote and reference MS SQL table and field names

    - by artvolk
    Good day! Despite the fact that LINQ2SQL and the ADO.NET Entity Framework exist, there are situations where I need to fall back to the plain old (untyped) DataSet and friends. While writing SQL for SqlCommand: Is it necessary to quote field and table names with []? Is it good practice to prefix table names with [dbo]? I tend to use this syntax:

        SqlCommand command = new SqlCommand(
            "SELECT [Field1], [Field2] FROM [dbo].[TableName]", connection);

    Maybe there is a better way? Thanks in advance!
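
    For what it's worth, the brackets are only strictly required when a name collides with a reserved word or contains unusual characters; a quick sketch with illustrative (made-up) names:

        -- Plain names need no quoting:
        SELECT Field1, Field2 FROM dbo.TableName;
        -- Brackets become necessary for reserved words or names with spaces:
        SELECT [Order], [Line Total] FROM [dbo].[Order Details];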

  • what do you do while code is compiling

    - by Jacob
    I'm looking for the best ideas for what to do while code is compiling or tests are running. Typically it's around 5 minutes of thumb twiddling. Only so many cups of coffee can be made and drunk in a day, and I don't want to always be seen in the kitchen or bothering other people.

  • Optimising speeds in HDF5 using PyTables

    - by Sree Aurovindh
    The problem concerns the write speed of the computers (10 32-bit machines) and PostgreSQL query performance. I will explain the scenario in detail. I have about 80 GB of data (along with appropriate database indexes in place), which I am trying to read from a PostgreSQL database and write into HDF5 using PyTables. I have 1 table and 5 variable arrays in each HDF5 file. The implementation of HDF5 is not multithreaded or enabled for symmetric multiprocessing. I have rented about 10 computers for a day and am trying to write the data in parallel to speed up my data handling.

    As for the PostgreSQL table, the overall record count is 140 million, and I have 5 primary/foreign key referring tables. I am not using joins, as they would not be scalable, so for a single lookup I do 6 lookups without joins and write the results into HDF5 format. For each lookup I do 6 inserts into the table and its corresponding arrays. The queries are really simple:

        select * from x.train where tr_id=1   -- primary key & indexed
        select q_t from x.qt where q_id=2     -- non-primary key but indexed

    (and similarly for the other five queries). Each computer writes two HDF5 files, so the total comes to around 20 files.

    Some calculations and statistics:

        Total number of records:          143,700,000
        Records per file:                 143,700,000 / 20 = 7,185,000
        Entries per file (5 arrays each): 7,185,000 * 5 = 35,925,000

    Current PostgreSQL configuration: my machine has 8 GB of RAM and a 2nd-generation i7 processor, and I made the following changes to the PostgreSQL configuration file:

        shared_buffers:       2 GB
        effective_cache_size: 4 GB

    Note on current performance: I have run it for about ten hours, and the number of records written per file so far is about 621,000 * 5 = 3,105,000. The bottleneck is that I can only rent the machines for 10 hours per day (overnight), and at this speed the job will take about 11 days, which is too long for my experiments. Please suggest how I can improve this. Questions:

    1. Should I use symmetric multiprocessing on those desktops (each has 2 cores and about 2 GB of RAM)? If so, what is suggested or preferable?
    2. If I change my PostgreSQL configuration file and increase the RAM, will it enhance my process?
    3. Should I use multithreading? If so, any links or pointers would be of great help.

    Thanks, Sree Aurovindh V
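
    As an aside, one common way to cut per-record round-trip overhead in a workload like this is to batch the lookups, fetching many ids per query instead of one at a time; a sketch against the tables named in the question (batch boundaries are arbitrary):

        -- One round trip fetches a whole range of training records:
        SELECT * FROM x.train WHERE tr_id BETWEEN 1 AND 10000 ORDER BY tr_id;
        -- Likewise for each referring table, using ids collected from that batch:
        SELECT q_t FROM x.qt WHERE q_id IN (2, 7, 15);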

  • What's the worst working environment you've had to suffer?

    - by John
    We'll leave "worst job ever" for another day if it wasn't already done... but after some recent discussions on good environments, what is the worst you've had? I've always been quite lucky - seats that go up and down, some kind of natural light, etc. But I think I dodged a bullet... what horror stories can you share?

  • Are functions declared before or after calling them?

    - by DMan
    I've made this a community wiki... Okay, I was looking through someone's code one day, and I was annoyed that they declared all their functions first and then called them later, below. I guess I'm used to Visual Studio's automatically generated functions, which are created after you call them, and I was wondering: which way do you prefer? Or is there a standard on this kind of thing?

  • CakePHP: How can I change this find call to include all records that do not exist in the associated table?

    - by Stephen
    I have a few tables with the following relationships: Company hasMany Jobs, Employees, Trucks, and Users. I've got all my foreign keys set up properly, along with the tables' models, controllers, and views. Originally, the jobs table had a boolean field called "assigned". The following find operation (from the JobsController) successfully returns all employees, all trucks, and any jobs that are not assigned and fall on a certain day, for a single company (without returning users, by utilizing the containable behavior):

        $this->set('resources', $this->Job->Company->find('first', array(
            'conditions' => array(
                'Company.id' => $company_id
            ),
            'contain' => array(
                'Employee',
                'Truck',
                'Job' => array(
                    'conditions' => array(
                        'Job.assigned' => false,
                        'Job.pickup_date' => date('Y-m-d', strtotime('Today'))
                    )
                )
            )
        )));

    Now, since writing this code, I've decided to do a lot more with the job assignments. So I've created a new model, Assignment, which belongsTo Truck and belongsTo Job, and added hasMany Assignments to both the Truck and Job models. I have both foreign keys in the assignments table, along with some other assignment fields. Now I'm trying to get the same information as above, only instead of checking the assigned field on the jobs table, I want to check the assignments table to ensure that the job does not exist there. I can no longer use the containable behavior together with the "joins" feature of the find method, due to MySQL errors (according to the Cookbook). But the following query returns all jobs, even if they fall on different days:

        $this->set('resources', $this->Job->Company->find('first', array(
            'joins' => array(
                array(
                    'table' => 'employees',
                    'alias' => 'Employee',
                    'type' => 'LEFT',
                    'conditions' => array('Company.id = Employee.company_id')
                ),
                array(
                    'table' => 'trucks',
                    'alias' => 'Truck',
                    'type' => 'LEFT',
                    'conditions' => array('Company.id = Truck.company_id')
                ),
                array(
                    'table' => 'jobs',
                    'alias' => 'Job',
                    'type' => 'LEFT',
                    'conditions' => array('Company.id = Job.company_id')
                ),
                array(
                    'table' => 'assignments',
                    'alias' => 'Assignment',
                    'type' => 'LEFT',
                    'conditions' => array('Job.id = Assignment.job_id')
                )
            ),
            'conditions' => array(
                'Job.pickup_date' => $day,
                'Company.id' => $company_id,
                'Assignment.job_id IS NULL'
            )
        )));
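
    Stripped of the CakePHP plumbing, the pattern being aimed at here is the LEFT JOIN / IS NULL anti-join, restricted to one company and one day; a plain-SQL sketch with placeholder values:

        -- Jobs on the given day that have no matching row in assignments.
        SELECT Job.*
        FROM   jobs AS Job
        LEFT JOIN assignments AS Assignment ON Assignment.job_id = Job.id
        WHERE  Job.company_id  = 42            -- placeholder company id
          AND  Job.pickup_date = '2010-05-20'  -- placeholder day
          AND  Assignment.job_id IS NULL;      -- i.e. the job is unassigned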

  • Get tweets of a public twitter profile

    - by Denzil
    Hi folks, I have a list of usernames on Twitter whose profiles are public. I wish to get all the tweets they have posted since the day they created their profiles. I checked the Twitter4J examples: http://github.com/yusuke/twitter4j/blob/master/twitter4j-examples/src/main/java/twitter4j/examples/GetTimelines.java . According to the Twitter API documentation, only the 20 most recent tweets are returned. Is there any way I can perform my task?

  • Balancing heuristics (for timetable problem)

    - by genesiss
    I'm writing a genetic algorithm for generating timetables. At the moment I'm using these two heuristics:

    1. Number of holes between lectures in one day (fewer holes, bigger score)
    2. Each hour has some value, so for each timetable I sum the values of the hours when lectures are scheduled (lectures at more appropriate hours, bigger score)

    I want to balance these two heuristics so that the algorithm doesn't favor either one. What would be the best way to achieve this?
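
    One standard approach is to normalize each heuristic to the [0, 1] range before weighting it, so that neither dominates simply by having a larger numeric scale; as a sketch, with the weight w chosen empirically and the min/max taken over the current population or known bounds:

        \text{fitness} = w \cdot \frac{h_1 - h_1^{\min}}{h_1^{\max} - h_1^{\min}}
                       + (1 - w) \cdot \frac{h_2 - h_2^{\min}}{h_2^{\max} - h_2^{\min}},
        \qquad 0 \le w \le 1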

  • Best PHP timezone list

    - by jimbo
    I have looked over PHP's list of supported timezones, but the whole list is a little long to include in a drop-down menu for the user to select his or her timezone. Is there a shorter list with just the main city/area for each zone that can be used? My geography is terrible, and add to that not knowing which areas fall into which timezone, and it could take a long day to construct the list myself! Thanks in advance.

  • How do I add NSDecimalNumbers?

    - by Terry B
    OK, this may be the dumb question of the day, but suppose I have a class with:

        NSDecimalNumber *numOne = [NSDecimalNumber numberWithFloat:1.0];
        NSDecimalNumber *numTwo = [NSDecimalNumber numberWithFloat:2.0];
        NSDecimalNumber *numThree = [NSDecimalNumber numberWithFloat:3.0];

    Why can't I have a function that adds those numbers:

        - (NSDecimalNumber *)addThem {
            return (self.numOne + self.numTwo + self.numThree);
        }

    I apologize in advance for being an idiot, and thanks!

  • who wrote 250k unit tests for webkit?

    - by amwinter
    Assuming a yield of 3 per hour, that's about 83,333 hours. At 8 hours a day that makes roughly 10,417 days; divide by thirty to get about 347 mythical man-months. I call them mythical because writing 120 tests per person per week is unreal. Can any wise soul out there on SO shed some light on what sort of mythical men write unreal quantities of tests for large software projects? Thank you.

  • SSI or PHP Include()?

    - by Ozzy
    Hi all. Basically, I am launching a site soon and I predict a lot of traffic; for scenario's sake, let's say I will have 1m uniques a day. The data will be static, but I need to have includes as well. I will only include an HTML page inside another HTML page, nothing dynamic (I have my reasons, which I won't disclose, to keep this simple). My question is: performance-wise, which is faster, an SSI include or PHP's include()?

  • What is causing this SQL 2005 Primary Key Deadlock between two real-time bulk upserts?

    - by skimania
    Here's the scenario: I've got a table called MarketDataCurrent (MDC) that has live-updating stock prices.

    I've got one process called 'LiveFeed' which reads prices streaming from the wire, queues up inserts, and uses a 'bulk upload to temp table, then insert/update to the MDC table' stored proc (BulkUpsert). I've got another process which then reads this data, computes other data, and saves the results back into the same table, using a similar BulkUpsert stored proc. Thirdly, there is a multitude of users running a C# GUI polling the MDC table and reading updates from it.

    Now, during the day when the data is changing rapidly, things run pretty smoothly, but after market hours we've recently started seeing an increasing number of deadlock exceptions coming out of the database; nowadays we see 10-20 a day. The important thing to note here is that these happen when the values are NOT changing.

    Here's all the relevant info. Table definition:

        CREATE TABLE [dbo].[MarketDataCurrent](
            [MDID] [int] NOT NULL,
            [LastUpdate] [datetime] NOT NULL,
            [Value] [float] NOT NULL,
            [Source] [varchar](20) NULL,
            CONSTRAINT [PK_MarketDataCurrent] PRIMARY KEY CLUSTERED
            (
                [MDID] ASC
            ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF,
                    IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON,
                    ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        ) ON [PRIMARY]

    (Stack Overflow won't let me post images until my reputation goes up to 10, so I'll add them as soon as you bump me up, hopefully as a result of this question. Image: http://farm5.static.flickr.com/4049/4690759452_6b94ff7b34.jpg )

    I've got a SQL Profiler trace running, catching the deadlocks, and here's what all the graphs look like (same image restriction applies): http://farm5.static.flickr.com/4035/4690125231_78d84c9e15_b.jpg

    Process 258 is calling the following 'BulkUpsert' stored proc repeatedly, while 73 is calling the next one:

        ALTER proc [dbo].[MarketDataCurrent_BulkUpload]
            @updateTime datetime,
            @source varchar(10)
        as
        begin transaction

        update c with (rowlock)
        set LastUpdate = getdate(), Value = t.Value, Source = @source
        from MarketDataCurrent c
        INNER JOIN #MDTUP t ON c.MDID = t.mdid
        where c.lastUpdate < @updateTime
          and c.mdid not in (select mdid from MarketData
                             where LiveFeedTicker is not null
                               and PriceSource like 'LiveFeed.%')
          and c.value <> t.value

        insert into MarketDataCurrent with (rowlock)
        select MDID, getdate(), Value, @source
        from #MDTUP
        where mdid not in (select mdid from MarketDataCurrent with (nolock))
          and mdid not in (select mdid from MarketData
                           where LiveFeedTicker is not null
                             and PriceSource like 'LiveFeed.%')

        commit

    And the other one:

        ALTER PROCEDURE [dbo].[MarketDataCurrent_LiveFeedUpload]
        AS
        begin transaction

        -- Update existing mdid
        UPDATE c WITH (ROWLOCK)
        SET LastUpdate = t.LastUpdate, Value = t.Value, Source = t.Source
        FROM MarketDataCurrent c
        INNER JOIN #TEMPTABLE2 t ON c.MDID = t.mdid;

        -- Insert new MDID
        INSERT INTO MarketDataCurrent with (ROWLOCK)
        SELECT * FROM #TEMPTABLE2
        WHERE MDID NOT IN (SELECT MDID FROM MarketDataCurrent with (NOLOCK))

        -- Clean up the temp table
        DELETE #TEMPTABLE2

        commit

    To clarify, those temp tables are being created by the C# code on the same connection and are populated using the C# SqlBulkCopy class. To me it looks like it's deadlocking on the PK of the table, so I tried removing that PK and switching to a unique constraint instead, but that increased the number of deadlocks 10-fold.
    I'm totally lost as to what to do about this situation and am open to just about any suggestion. HELP!!
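
    For what it's worth, a common way to stop two concurrent upserts from deadlocking on the same key range is to take the update and key-range locks up front, rather than letting the UPDATE and the trailing INSERT acquire them separately; a simplified sketch using the question's column names (not the asker's exact procs):

        BEGIN TRANSACTION;

        -- Acquire update + key-range locks in one place so a second upsert
        -- blocks instead of deadlocking.
        UPDATE c WITH (UPDLOCK, HOLDLOCK)
        SET    c.LastUpdate = t.LastUpdate, c.Value = t.Value, c.Source = t.Source
        FROM   MarketDataCurrent c
        INNER JOIN #TEMPTABLE2 t ON c.MDID = t.MDID;

        INSERT INTO MarketDataCurrent (MDID, LastUpdate, Value, Source)
        SELECT t.MDID, t.LastUpdate, t.Value, t.Source
        FROM   #TEMPTABLE2 t
        WHERE  NOT EXISTS (SELECT 1 FROM MarketDataCurrent WITH (UPDLOCK, HOLDLOCK)
                           WHERE MDID = t.MDID);

        COMMIT;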

  • Do you know of a log4net appender which rolls on date but lets you limit the total number of files?

    - by Michal Drozdowicz
    Hi, I need to define an appender for log4net such that I get one log file for each day, but the total number of files is limited to, let's say, 30. That is, I want to keep only the logs no older than 30 days and delete the older ones. I've tried doing this with RollingFileAppender, but it seems that specifying a limit on the number of files to keep is not supported. Do you know of an alternative solution that I could use?

  • SQL Insert into ... values ( SELECT ... FROM ... )

    - by Shadow_x99
    I am trying to insert into a table using input from another table. Although this is entirely feasible in many database engines, I always seem to struggle to remember the correct syntax for the SQL engine of the day (MySQL, Oracle, SQL Server, Informix, DB2). I've been wondering if there is a silver-bullet syntax from the SQL standard (for example, SQL-92) that would allow me to insert the values without worrying about the underlying database.
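
    The INSERT ... SELECT form is part of SQL-92 and is accepted by all of the engines listed; a minimal sketch with hypothetical table and column names:

        -- The SELECT supplies the rows; there is no VALUES clause.
        INSERT INTO target_table (col1, col2)
        SELECT src.colA, src.colB
        FROM   source_table AS src
        WHERE  src.colA IS NOT NULL;  -- any predicate you like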

  • Need some sort of "conditional grouping" in MySQL.

    - by serg555
    I have an Article table:

        id | type | date
        -----------------------
        1  | A    | 2010-01-01
        2  | A    | 2010-01-01
        3  | B    | 2010-01-01

    The type field can be A, B, or C. I need to run a report that returns how many articles of each type there are per day, like this:

        date       | count(type="A") | count(type="B") | count(type="C")
        -----------------------------------------------------------------
        2010-01-01 | 2               | 1               | 0
        2010-01-02 | 5               | 6               | 7

    Currently I am running 3 queries, one for each type, and then manually merging the results:

        select date, count(id) from article where type="A" group by date

    Is it possible to do this in one query? (In pure SQL, no stored procedures or anything like that.) Thanks
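
    A sketch of the single-query version, using conditional aggregation over the question's Article table; each SUM counts only the rows of one type:

        SELECT date,
               SUM(CASE WHEN type = 'A' THEN 1 ELSE 0 END) AS count_a,
               SUM(CASE WHEN type = 'B' THEN 1 ELSE 0 END) AS count_b,
               SUM(CASE WHEN type = 'C' THEN 1 ELSE 0 END) AS count_c
        FROM   Article
        GROUP  BY date;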
