Search Results

Search found 5849 results on 234 pages for 'partition scheme'.


  • Problem with stack based implementation of function 0x42 of int 0x13

    - by IceCoder
    I'm trying a new approach to int 0x13 (just to learn more about how the system works): using the stack to build the DAP. Assuming that DL contains the disk number, AX contains the address of the bootable entry in the partition table, DS is set to the right segment, and the stack is correctly set up, this is the code:

        push DWORD 0x00000000
        add  ax, 0x0008
        mov  si, ax
        push DWORD [ds:(si)]
        push DWORD 0x00007c00
        push WORD  0x0001
        push WORD  0x0010
        push ss
        pop  ds
        mov  si, sp
        mov  sp, bp
        mov  ah, 0x42
        int  0x13

    As you can see, I push the DAP structure onto the stack, update DS:SI to point to it (DL is already set), then set AH to 0x42 and call int 0x13. The result is error 0x01 in AH and, obviously, CF set. No sectors are transferred. I have checked the stack trace endlessly and it is fine, and the partition table is fine too; I cannot figure out what I'm missing. This is the stack trace portion of the disk address packet:

        0x000079ea: 10 00        adc %al,(%bx,%si)
        0x000079ec: 01 00        add %ax,(%bx,%si)
        0x000079ee: 00 7c 00     add %bh,0x0(%si)
        0x000079f1: 00 00        add %al,(%bx,%si)
        0x000079f3: 08 00        or  %al,(%bx,%si)
        0x000079f5: 00 00        add %al,(%bx,%si)
        0x000079f7: 00 00        add %al,(%bx,%si)
        0x000079f9: 00 a0 07 be  add %ah,-0x41f9(%bx,%si)

    I'm using the latest version of qemu and trying to read from the hard drive (0x80). I have also tried a 4-byte alignment for the structure, with the same result (CF set, AH 0x01), and the extensions are present.

  • Why are my attempts to open a file using open for writing failing? Ada 95

    - by mat_geek
    When I attempt to open a file for writing I get an Ada.IO_Exceptions.Name_Error. The procedure being called is Ada.Text_IO.Open and the file name is "C:\CC_TEST_LOG.TXT". This file does not exist. This is on Windows XP on an NTFS partition. The user has permissions to create and write to the directory, and the filename is well under the WIN32 maximum path length.

        name_2 : String := "C:\CC_TEST_LOG.TXT";

        if name_2'last > name_2'first then
           begin
              Ada.Text_IO.Open(file, Ada.Text_IO.Out_File, name_2);
              Ada.Text_IO.Put_Line(
                 "CC_Test_Utils: LogFile: ERROR: Open, File " & name_2);
              return;
           exception
              when The_Error : others =>
                 Ada.Text_IO.Put_Line(
                    "CC_Test_Utils: LogFile: ERROR: Open Failed; " &
                    Ada.Exceptions.Exception_Name(The_Error) & ", File " & name_2);
           end;
        end if;

  • C compiler cannot create executables when trying to build Binutils

    - by Koning Baard XIV
    I am trying to build Linux From Scratch, and I am now at chapter 5.4, which tells me how to build Binutils. I have the binutils 2.20 source code, but when I try to configure it:

        time { ./binutils-2.20/configure --target=$LFS_TGT --prefix=/tools --disable-nls --disable-werror ; }

    it gives me an error:

        checking build system type... i686-pc-linux-gnu
        checking host system type... i686-pc-linux-gnu
        checking target system type... i686-lfs-linux-gnu
        checking for a BSD-compatible install... /usr/bin/install -c
        checking whether ln works... yes
        checking whether ln -s works... yes
        checking for a sed that does not truncate output... /bin/sed
        checking for gawk... gawk
        checking for gcc... GCC
        checking for C compiler default output file name...
        configure: error: in `/media/LFS':
        configure: error: C compiler cannot create executables
        See `config.log' for more details.

    My config.log is on pastebin: http://pastebin.com/hX7v5KLn

    I have just installed Ubuntu 10.04, reinstalled GCC and installed G++. The build is done by a non-root, non-admin user called 'lfs' (as described in Linux From Scratch), on a different partition than the one the system is installed on. Can anyone help me? Thanks

  • On Windows 7: Same path but Explorer & Java see different files than Powershell

    - by Ukko
    Submitted for your approval, a story about a poor little Java process trapped in the twilight zone... Before I throw up my hands and just say that the NTFS partition is borked, is there any rational explanation for what I am seeing?

    I have a file with a path like this: C:\Program Files\Company\product\config\file.xml. I am reading this file after an upgrade and seeing something wonky: Eclipse and my Java app still see the old version of this file, while some other programs see the new version. The test that convinced me it was not a fat-fingered path causing the problem was this: in Explorer I entered the above path and Explorer displayed the old version of the file. Forcing Explorer to reload via Ctrl-F5 still yields the old version. This is the same behavior I get in Java. Now, in PowerShell I enter

        more "C:\Program Files\Company\product\config\file.xml"

    (I cut and paste the path from Explorer to make sure I am not screwing anything up) and it shows me the new version of the file.

    So, for the programming aspect of this: is there a cache or some system component that could be storing this stale reference, and am I responsible for checking or resetting it for some class of files? I can imagine somebody being "creative" in how XML files are processed to provide some bell or whistle. But it could also be a case of the partition just being borked. Any insights appreciated... Thanks!

  • Design Decision - Scaling out web based application's architecture

    - by Vadi
    This question is about a design decision. I am currently working on a web project that will start with about 40K users and is expected to grow to 50M users (not concurrent users, though) within a couple of months. I would like an architecture that can be scaled out easily, without much effort.

    To explain, let me use a trivial scenario. Say user entities and services such as CreateUser, AuthenticateUser, etc. start out as simple method calls made by the page controllers. Once traffic increases, a service like user authentication (or other services related to user entities) has to be moved out to a different internal server to spread the load. At the same time, making RPC calls over the network while the user count is only 40K would be overkill.

    My proposal is to use IPC initially and, when we need to scale out, to switch internally to TCP-based RPC calls. For example, I am thinking of starting with System.IO.Pipes.NamedPipeServerStream and moving to a TcpListener later on. If the design properly encapsulates this approach, it would be easy for us to scale services out onto multiple network servers while still avoiding network calls when the user count is small. Is this the best approach? Any suggestions would be great.

    Note: database scaling is definitely a second-phase optimization; we already have an architectural design in place to partition the data easily when traffic increases. The primary bottleneck over time will be the application servers.

  • MongoDB - proper use of collections?

    - by zmg
    In Mongo, my understanding is that you can have databases and collections. I'm working on a social-type app that will have blogs and comments (among other things) and had previously been using MySQL with pretty heavy partitioning in an attempt to limit possible concurrency issues. With MySQL I've stuffed all my user data into a _user database, with several tables to further partition the data (blogs, pages, etc.).

    My immediate reaction with Mongo would be to create a 'users' database with one collection per user. In this way, blog entries for user 'zach' would go into the 'zach' collection, with associated comments and such becoming sub-objects in the same collection. Basically like dynamically creating one table per user in MySQL, but apparently without the complexity and limitations that might impose. Of course, since I haven't really used Mongo before, I'm having trouble gauging the (ahem...) quality of this idea and the potential problems it might cause down the road.

    I'd like user data to be treated a lot like a user's directory in a *nix environment, where user-created/non-shared (mostly) data gets put into one place (currently with MySQL that would be the appname_users database mentioned above). Most of the user data will be specific to the user's page(s). Some of the user data that is queried across all site users (searchable user profiles) is currently kept in a separate database/table, and I expect things like this could be put into an appname_system database and be broken up into collections and/or application-specific databases (appname_profiles).

    Anyway, since the available documentation on this is currently a little thin and my experience is extremely limited, I thought I might find a little guidance from someone with a better working understanding of the system. On the plus side, I'd really already been attempting to treat MySQL as a schema-less document store, and doing this with Mongo seems much more intuitive/sane/rational, so I'm really looking forward to getting started. Thanks, Zach
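
    For what it's worth, here is a minimal pymongo sketch of the "one collection per user" layout described above; collections are created lazily on first write, so db[username] just works. The database, collection and field names are made up for illustration, not a recommendation:

        from pymongo import MongoClient

        client = MongoClient()              # assumes a local mongod
        db = client["users"]                # one database for all per-user data

        def add_blog_entry(username, title, body):
            # db[username] lazily creates/uses a collection named after the user,
            # so user 'zach' gets his own 'zach' collection
            db[username].insert_one({
                "type": "blog",
                "title": title,
                "body": body,
                "comments": [],             # comments kept as sub-documents
            })

        add_blog_entry("zach", "hello", "first post")
        print(list(db["zach"].find({"type": "blog"})))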

  • Compiler Errors...it ran yesterday!?

    - by howdytest
    This is a pre-existing Java project being run in Eclipse 3.5.2 32-bit.

    Day 1: Install the Java SE 6 Update 20 JDK. Experience a crash in Eclipse. Install Java 5. Same problem (uninstall Java 5). Re-install Java 6. Install Eclipse 3.3.1. Install Eclipse 3.5.2 32-bit. No problems. Run Eclipse 3.5.2 64-bit. No problems. Set up the project, configure, and run. No problems.

    Day 2: Load Eclipse to start a new project. The previous project now has 940 errors, all of type "Java Problem". The project ran 100% without a problem on Day 1. The only thing that happened between Day 1 and Day 2 was restarting my computer. I just tried to recreate the project, step by step, and am still getting the same errors.

    I know it's not the code -- it was working. Not to mention that it's an open-source project, so such a problem would be documented. I'm thinking something is wrong with my Java install, or perhaps it's a 32-bit/64-bit problem. I'm running Win7 64-bit. So before formatting my Windows partition, I thought I'd throw the problem your way to see if anyone knows what's going on. Thanks.

  • Looking for fast, minimal, preferably free disc cloning software [closed]

    - by Dave
    We have to test our application's installation and functionality on many Windows operating system versions and languages (XP, Vista, Win7; English, Spanish, Portuguese, etc.; 32-bit & 64-bit). While we can do much of this in virtual machines, we have noticed that VMs sometimes hide problems or raise false bugs, so we need to do "bare metal" OS installs for much of our testing.

    I have been using Acronis True Image for the past year and am not impressed. It often gives random errors that require a reboot, and it is really slow. For example, when trying to restore an image, it goes through a "Locking partition" cycle about three times (once after you click OK on each step of the wizard), each of which can take 5 minutes to complete. This all happens BEFORE it actually starts the image copy, which is sometimes quick (3-5 minutes), sometimes long (hours). The sizes of all of our images are roughly the same, so that is not the cause.

    So, anyway, I'm looking to switch to something else. I only need very basic functionality: creating images of entire discs, and then restoring those images onto the exact same hard drive at a later date. That's it. I'm not opposed to paying for a good piece of software, but if there is something free out there that does the job well, that would be preferred. The OS the imaging software would run on is Windows Vista, but bootable media (into a Linux flavor) would be fine too, as long as it's quick to use and reliable. Recommendations?

    (Also, moderators, if this should be a CW, I'll be happy to mark it as such; I'm unclear about the rules there.)

  • How to avoid multiple, unused has_many associations when using multiple models for the same entity

    - by mikep
    Hello, I'm looking for a nice, Ruby/Rails-esque solution for something. I'm trying to split up some data using multiple tables rather than just one gigantic table, the reasoning being to avoid the performance drop that would come with having a big table. So, rather than have one table called books, I have multiple tables: books1, books2, books3, etc. (I know that I could use a partition, but for now I've decided to go the 'multiple tables' route.)

    Each user has their books placed into a specific table. The actual book table is chosen when the user is created, and all of their books go into the same table. The goal is to try to keep each table pretty much even -- but that's a different issue. One thing I don't particularly want to have is a bunch of unused associations in the User class. Right now, it looks like I'd have to do the following:

        class User < ActiveRecord::Base
          has_many :books1, :books2, :books3, :books4, :books5
        end

        class Books1 < ActiveRecord::Base
          belongs_to :user
        end

        class Books2 < ActiveRecord::Base
          belongs_to :user
        end

    First off, for each specific user only one of the book tables would be usable/applicable, since all of a user's books are stored in the same table. So only one of the associations would be in use at any time, and any other has_many :bookX association that was loaded would be a waste. I don't really know what Ruby/Rails does internally with all of those has_many associations, so maybe it's not so bad; but right now I'm thinking that it's really wasteful, and that there may be a better, more efficient way of doing this.

    Is there some sort of special Ruby/Rails methodology that could be applied here to avoid having to have all of those has_many associations? Also, does anyone have any advice on how to abstract the fact that there are multiple book tables behind a single Books model/class?

  • How do you make life easier for yourself when developing a really large database

    - by Hannes de Jager
    I am busy developing two web-based systems with MySQL databases, and the number of tables/views/stored routines is really becoming large, so it is more and more challenging to handle the complexity. In programming languages we have namespacing, e.g. Java packages or C++ namespaces, to partition the software and group related things together to make them more understandable. Databases, on the other hand, have more of a flat structure (MySQL at least): tables and stored procedures all live at the same level. So one has to be more creative -- establishing naming conventions, perhaps using more than one database, or using tools to visualize things.

    What methods do you use to ease the pain and stay effective while developing your databases, so you don't get lost in a sea of tables, fields and stored procs? Feel free to mention tools you use as well, but try to restrict it to open source, and preferably Linux, solutions if that's OK.

    By the way, how many tables would a database need to have to be considered large in terms of design?

  • Pseudo-quicksort time complexity

    - by Ord
    I know that quicksort has O(n log n) average time complexity. A pseudo-quicksort (which is only a quicksort when you look at it from far enough away, with a suitably high level of abstraction) that is often used to demonstrate the conciseness of functional languages is as follows (given in Haskell):

        quicksort :: Ord a => [a] -> [a]
        quicksort []     = []
        quicksort (p:xs) = quicksort [y | y <- xs, y < p] ++ [p] ++ quicksort [y | y <- xs, y >= p]

    Okay, so I know this thing has problems. The biggest problem with this is that it does not sort in place, which is normally a big advantage of quicksort. Even if that didn't matter, it would still take longer than a typical quicksort because it has to do two passes of the list when it partitions it, and it does costly append operations to splice it back together afterwards. Further, the choice of the first element as the pivot is not the best choice.

    But even considering all of that, isn't the average time complexity of this quicksort the same as the standard quicksort, namely O(n log n)? Because the appends and the partition still have linear time complexity, even if they are inefficient.
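
    One way to make the last paragraph precise (a standard argument, not taken from the question itself): if the chosen pivot ends up in position k, the cost satisfies the recurrence T(n) = T(k) + T(n-1-k) + c*n, where the c*n term covers both partition passes and the later appends. Averaging over the pivot position gives the same O(n log n) expected bound as for in-place quicksort; the extra pass and the appends only increase the constant c. The worst case is still O(n^2), e.g. an already-sorted list with the first element as pivot.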

  • Partitioning with Hibernate

    - by Alex
    Hello, we have a requirement to delete around 200K records from the database every day. Our application is Java/JEE based, using an Oracle DB and Hibernate as the ORM tool. We explored various options: Hibernate batch processing, stored procedures, and database partitioning. Our DBA suggests database partitioning is the best way to go, so we can easily recreate and drop the partitioned table every day.

    Now the issue is that we have two kinds of data: one kind we want to delete every day and the other we want to keep. Suppose this data is stored in a table "Trade". With partitioning, we would have two "Trade" tables. We already have a Hibernate-based DAO layer to fetch/store trades from/to the DB. When we partition the database, how can we control through Hibernate which of the two tables a trade goes into? Basically, I want the trades that need to be deleted by the end of the day to go into the partitioned table, and the trades I want to keep to go into the main table. Please suggest how this can be done with Hibernate. We may add an additional column to identify the trades to be deleted, but how can we ensure those trades go to the partitioned trade table using Hibernate? I would appreciate it if someone could suggest a better approach in case we are on the wrong path.

  • How should I configure grub for booting linux kernel from a USB hard drive?

    - by skolima
    I have a laptop hard drive in an external enclosure which I use as a large pendrive. For an added twist, I have installed Linux on it, so I can boot any machine with my distribution of choice (e.g. for data recovery, repairing a b0rked system, or just using a borrowed laptop without destroying the preinstalled Windows).

    The problem is that, depending on the hardware configuration, the USB hard drive may be visible under different paths. For the grub configuration I just use (hd0,0), as it is relative to the device grub was launched from. I have UUID entries in /etc/fstab. I also specify rootwait in the kernel parameters so that the kernel waits for the USB subsystem to settle down before trying to mount the device. What should I pass to the kernel as root= ?

    Currently I boot from the pendrive once, check the debug messages to see which /dev/sdX device the kernel has assigned to the USB drive, then reboot and edit the grub configuration. I can't change anything on the PC besides enabling "Boot from USB hard drive" in the BIOS and setting it to a higher priority than the internal hard drives. There are various initrd-generating scripts that support UUIDs in the root device path; unfortunately the Gentoo native one (genkernel) does not support rootwait, and I had no luck trying to use others.

    The boot process goes like this (it is quite similar in Windows):

    1. The BIOS chooses the boot device and loads whatever is in its MBR (which happens to be grub stage 1).
    2. Grub loads its configuration and stage-2 files from the device it has set as root, using (hd0) for the device it was loaded from by the BIOS.
    3. Grub loads and starts a kernel (still the same numbering, so I can use (hd0,0) again).
    4. The kernel initializes all built-in devices (rootwait does its magic now).
    5. The kernel mounts the partition it was passed as root (this is a kernel parameter, not a grub parameter).
    6. init.d starts the userland part of booting, including mounting things from /etc/fstab.

    Step 5 is the one giving me problems.

  • building a gl3 app under cygwin

    - by user445264
    I've got a small OpenGL 3.2 app that I've been developing on Linux using the standard GNU tools (gmake/gcc). The code seems pretty portable -- I had no problems running it on OS X until I started using GL3 features that the Mac mini's GL drivers don't seem to support. I've got a Boot Camp partition with Windows XP on the same mini, and I'd like to run my app there if possible. The Windows drivers definitely support GL 3.2, but I'm having trouble linking. This seems like a really common issue, but I haven't found any answers online that address using OpenGL 1.2 under Cygwin. I'm using glew-1.5.5 and linking like so:

        g++ -o glToy *.o -L/cygdrive/c/Program\ Files/glew-1.5.5/lib -lglew32 -lglut32 -lglu32 -lopengl32

    but I get a whole lot of this sort of output:

        Program.o:/home/Jacob/glToy/Program.cpp:134: undefined reference to `__imp____glewUseProgram'
        Program.o:/home/Jacob/glToy/Program.cpp:235: undefined reference to `__imp____glewActiveTexture'
        Program.o:/home/Jacob/glToy/Program.cpp:73: undefined reference to `__imp____glewGetShaderiv'
        ...

    Any ideas what I'm doing wrong? Or perhaps this isn't a workable setup? Other ideas for getting this going on the Mac mini (2009 version)? Thanks!

  • algorithm - How to sort a 0/1 array with 2n/3 comparisons?

    - by Jackson Tale
    In The Algorithm Design Manual there is exercise 4-26:

    Consider the problem of sorting a sequence of n 0's and 1's using comparisons. For each comparison of two values x and y, the algorithm learns which of x < y, x = y, or x > y holds.
    (a) Give an algorithm to sort in n - 1 comparisons in the worst case. Show that your algorithm is optimal.
    (b) Give an algorithm to sort in 2n/3 comparisons in the average case (assuming each of the n inputs is 0 or 1 with equal probability). Show that your algorithm is optimal.

    For (a), I think it is fairly easy: I can choose a[n-1] as the pivot, then do something like the partition step in quicksort -- scan 0 to n - 2 and find the point where everything on the left is 0 and everything on the right is 1. This takes n - 1 comparisons.

    But for (b), I can't get a clue. It says "each of the n inputs is 0 or 1 with equal probability", so I guess I can assume the numbers of 0s and 1s are about equal? But how can I get a result that involves 1/3 -- divide the whole array into 3 groups? Thanks
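
    For part (a), here is a minimal Python sketch of the approach described above (compare every other element against a[n-1]); the if/elif chain stands for a single three-way comparison, and the input is assumed to contain only 0s and 1s:

        def sort01(a):
            """Sort a 0/1 list using exactly n - 1 three-way comparisons."""
            if not a:
                return []
            pivot = a[-1]
            less = equal = greater = 0
            for x in a[:-1]:                 # n - 1 comparisons in total
                if x < pivot:                # one three-way comparison of x vs pivot
                    less += 1
                elif x == pivot:
                    equal += 1
                else:
                    greater += 1
            if less:                         # pivot must be 1; the smaller ones are 0
                return [0] * less + [1] * (equal + 1)
            if greater:                      # pivot must be 0; the larger ones are 1
                return [0] * (equal + 1) + [1] * greater
            return [pivot] * len(a)          # all elements equal the pivot

        print(sort01([1, 0, 1, 1, 0, 0]))    # [0, 0, 0, 1, 1, 1]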

  • Strategies for dealing with Circular references caused by JPA relationships?

    - by ams
    I am trying to partition my application into modules by feature, packaged into separate jars such as feature-a.jar, feature-b.jar, etc. An individual feature jar such as feature-a.jar should contain all the code for feature A, including JPA entities, business logic, REST APIs, unit tests, integration tests, etc. The problem I am running into is that bi-directional relationships between JPA entities cause circular references between the jar files. For example, the Customer entity should be in customer.jar and the Order entity should be in order.jar, but Customer references Order and Order references Customer, making it hard to split them into separate jars / Eclipse projects.

    Options I see for dealing with the circular dependencies in JPA entities:

    Option 1: Put all the entities into one jar / one project.
    Option 2: Don't map certain bi-directional relationships, to avoid circular dependencies across projects.

    Questions: What rules / principles have you used to decide when to do bi-directional mapping vs. not? Have you been able to break JPA entities into their own projects / jars by feature, and if so, how did you avoid the circular dependency issues?

  • Aggregating a list of dates to start and end date

    - by Joe Mako
    I have a list of dates and IDs, and I would like to roll them up into periods of consecutive dates, within each ID. For a table called "data" with the columns "testid" and "pulldate":

        | A79 | 2010-06-02 |
        | A79 | 2010-06-03 |
        | A79 | 2010-06-04 |
        | B72 | 2010-04-22 |
        | B72 | 2010-06-03 |
        | B72 | 2010-06-04 |
        | C94 | 2010-04-09 |
        | C94 | 2010-04-10 |
        | C94 | 2010-04-11 |
        | C94 | 2010-04-12 |
        | C94 | 2010-04-13 |
        | C94 | 2010-04-14 |
        | C94 | 2010-06-02 |
        | C94 | 2010-06-03 |
        | C94 | 2010-06-04 |

    I want to generate a table with the columns "testid", "group", "start_date", "end_date":

        | A79 | 1 | 2010-06-02 | 2010-06-04 |
        | B72 | 2 | 2010-04-22 | 2010-04-22 |
        | B72 | 3 | 2010-06-03 | 2010-06-04 |
        | C94 | 4 | 2010-04-09 | 2010-04-14 |
        | C94 | 5 | 2010-06-02 | 2010-06-04 |

    This is the code I came up with:

        SELECT t2.testid, t2.group,
               MIN(t2.pulldate) AS start_date,
               MAX(t2.pulldate) AS end_date
        FROM (SELECT t1.pulldate, t1.testid,
                     SUM(t1.check) OVER (ORDER BY t1.testid, t1.pulldate) AS group
              FROM (SELECT data.pulldate, data.testid,
                           CASE WHEN data.testid = LAG(data.testid, 1)
                                       OVER (ORDER BY data.testid, data.pulldate)
                                 AND data.pulldate = date (LAG(data.pulldate, 1)
                                       OVER (PARTITION BY data.testid
                                             ORDER BY data.pulldate)) + integer '1'
                                THEN 0 ELSE 1 END AS check
                    FROM data
                    ORDER BY data.testid, data.pulldate) AS t1) AS t2
        GROUP BY t2.testid, t2.group
        ORDER BY t2.group;

    I use the LAG window function to compare each row to the previous one, producing a 1 whenever I need to start a new group; I then take a running sum of that column and aggregate over the combinations of "group" and "testid". Is there a better way to accomplish my goal, or does this operation have a name? I am using PostgreSQL 8.4.
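
    As an aside, this kind of problem is usually called "gaps and islands". For comparison, here is the same "flag the break, extend the current group" logic in plain Python (assuming the rows are already sorted by testid, pulldate, as the query's ORDER BY does):

        from datetime import date, timedelta

        def to_ranges(rows):
            """rows: (testid, pulldate) pairs sorted by testid then pulldate.
            Collapses each run of consecutive dates into (testid, start, end)."""
            ranges = []
            prev_id, prev_date = None, None
            for testid, d in rows:
                # same test as the CASE WHEN ... LAG(...) expression above
                if testid != prev_id or d != prev_date + timedelta(days=1):
                    ranges.append([testid, d, d])      # start a new island
                else:
                    ranges[-1][2] = d                  # extend the current island
                prev_id, prev_date = testid, d
            return [tuple(r) for r in ranges]

        rows = [("A79", date(2010, 6, 2)), ("A79", date(2010, 6, 3)),
                ("A79", date(2010, 6, 4)), ("B72", date(2010, 4, 22)),
                ("B72", date(2010, 6, 3)), ("B72", date(2010, 6, 4))]
        print(to_ranges(rows))
        # three islands: A79 2010-06-02..04, B72 2010-04-22..22, B72 2010-06-03..04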

  • How to duplicate all data in a table except for a single column that should be changed.

    - by twiga
    I have a question regarding a unified insert query against tables with different data structures (Oracle). Let me elaborate with an example:

        tb_customers (
            id NUMBER(3),
            name VARCHAR2(40),
            archive_id NUMBER(3)
        )

        tb_suppliers (
            id NUMBER(3),
            name VARCHAR2(40),
            contact VARCHAR2(40),
            xxx,
            xxx,
            archive_id NUMBER(3)
        )

    The only column that is present in all tables is [archive_id], which is always part of the primary key. The plan is to create a new archive of the dataset by copying (duplicating) all records to a different database partition and incrementing the archive_id of those records accordingly. My problem is with the select statements that do the actual duplication of the data: because the columns vary from table to table, I am struggling to come up with a unified select statement that will copy the data and update the archive_id. One solution (that works) is to iterate over all the tables in a stored procedure and do:

        CREATE TABLE temp as (SELECT * from ORIGINAL_TABLE);
        UPDATE temp SET archive_id=something;
        INSERT INTO temp (select * from temp);
        DROP TABLE temp;

    I do not like this solution very much, as the DDL commands muck up all restore points. Does anyone else have any solution?

  • Efficiently select top row for each category in the set

    - by VladV
    I need to select a top row for each category from a known set (somewhat similar to this question). The problem is how to make this query efficient for a large number of rows. For example, let's create a table that stores temperature recordings in several places:

        CREATE TABLE #t (
            placeId int,
            ts datetime,
            temp int,
            PRIMARY KEY (ts, placeId)
        )

        -- insert some sample data
        SET NOCOUNT ON
        DECLARE @n int, @ts datetime
        SELECT @n = 1000, @ts = '2000-01-01'
        WHILE (@n > 0)
        BEGIN
            INSERT INTO #t VALUES (@n % 10, @ts, @n % 37)
            IF (@n % 10 = 0) SET @ts = DATEADD(hour, 1, @ts)
            SET @n = @n - 1
        END

    Now I need to get the latest recording for each of the places 1, 2, 3. This way is efficient, but doesn't scale well (and looks dirty):

        SELECT * FROM (
            SELECT TOP 1 placeId, temp FROM #t WHERE placeId = 1 ORDER BY ts DESC
        ) t1
        UNION ALL
        SELECT * FROM (
            SELECT TOP 1 placeId, temp FROM #t WHERE placeId = 2 ORDER BY ts DESC
        ) t2
        UNION ALL
        SELECT * FROM (
            SELECT TOP 1 placeId, temp FROM #t WHERE placeId = 3 ORDER BY ts DESC
        ) t3

    The following looks better but works much less efficiently (30% vs 70% according to the optimizer):

        SELECT placeId, ts, temp FROM (
            SELECT placeId, ts, temp,
                   ROW_NUMBER() OVER (PARTITION BY placeId ORDER BY ts DESC) rownum
            FROM #t
            WHERE placeId IN (1, 2, 3)
        ) t
        WHERE rownum = 1

    The problem is that in the latter query's execution plan, a clustered index scan is performed on #t and 300 rows are retrieved, sorted, numbered, and then filtered, leaving only 3 rows, whereas the former query fetches one row three times. Is there a way to perform the query efficiently without lots of unions?

  • How to write these two queries for a simple data warehouse, using ANSI SQL?

    - by morpheous
    I am writing a simple data warehouse that will allow me to query a table to observe periodic (say weekly) changes in data, as well as changes in the change of the data (e.g. the week-to-week change in the weekly sale amount). For the purposes of simplicity, I will present very simplified (almost trivialized) versions of the tables I am using.

    The sales data table is a view and has the following structure:

        CREATE TABLE sales_data (
            sales_time date NOT NULL,
            sales_amt double NOT NULL
        )

    For the purpose of this question I have left out other fields you would expect to see, like product_id, sales_person_id, etc., as they have no direct relevance here. AFAICT, the only fields that will be used in the query are sales_time and sales_amt (unless I am mistaken).

    I also have a date dimension table with the following structure, which partitions dates into reporting ranges:

        CREATE TABLE date_dimension (
            id integer NOT NULL,
            datestamp date NOT NULL,
            day_part integer NOT NULL,
            week_part integer NOT NULL,
            month_part integer NOT NULL,
            qtr_part integer NOT NULL,
            year_part integer NOT NULL
        )

    I need to write queries that will allow me to do the following:

    1. Return the week-on-week change in sales_amt for a specified period -- for example, the change between sales today and sales N days ago, where N is a positive integer (N == 7 in this case).
    2. Return the change in the change of sales_amt for a specified period. In (1) we calculated the week-on-week change; now we want to know how that change differs from the (week-on-week) change calculated last week.

    I am stuck at this point, as SQL is my weakest skill. I would be grateful if an SQL master could explain how I can write these queries in a DB-agnostic way (i.e. using ANSI SQL).
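
    Before the SQL itself, the two quantities can be pinned down with a tiny sketch (plain Python with a hypothetical dict mapping date -> total sales_amt, purely to make the arithmetic concrete):

        from datetime import date, timedelta

        def changes(sales, day, n=7):
            """sales: {date: sales_amt}.  Returns (change, change_of_change):
            change           = sales[day]  - sales[day - n]
            change_of_change = change(day) - change(day - n)"""
            lag = timedelta(days=n)
            change = sales[day] - sales[day - lag]
            prev_change = sales[day - lag] - sales[day - 2 * lag]
            return change, change - prev_change

        # hypothetical figures, one total per day
        sales = {date(2010, 6, 1) + timedelta(days=i): 100 + 5 * i for i in range(22)}
        print(changes(sales, date(2010, 6, 22)))   # (35, 0): steady growth, so the change of the change is 0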

  • My PHP code is commented out

    - by Mr A
    Before all this happened, I was running a WordPress install for developing themes, using XAMPP. I decided to upgrade the memory of this machine from 2GB to 6GB since I needed extra room for applications, so I backed up my code to a separate partition by copying it. Since I had a 32-bit OS at the time, I formatted the computer and installed a 64-bit version. All is well and fine on the OS side, but when I set up my web dev environment something went wrong.

    I imported my htdocs back by simply copying everything into a fresh install of XAMPP, and noticed that none of the code I had put there is working. My CI code displays my PHP source in the browser, and my WordPress theme's PHP code shows up commented out when I view the source. The themes included in the fresh WordPress install work, so there's something I am missing here. From the looks of it, PHP itself is being executed properly, since anything that I install fresh works; it's just the code that came from the previous XAMPP that doesn't.

  • How to make ActiveRecord work with legacy partitioned/sharded databases/tables?

    - by Utensil
    Thanks for your time first... After all the searching on Google, GitHub and here, I only got more confused about the big words (partition/shard/federate), so I figure I have to describe the specific problem I met and ask around.

    My company's databases deal with massive numbers of users and orders, so we split databases and tables in various ways, some of which are described below:

        way        database and table name    sharded by (maybe it should be called partitioned by?)
        YZ.X       db_YZ.tb_X                 last three digits of the order serial number
        YYYYMMDD.  db_YYYYMMDD.tb             date
        YYYYMM.DD  db_YYYYMM.tb_DD            date too

    The basic concept is that databases and tables are separated according to a field (not necessarily the primary key), and there are too many databases and too many tables, so writing or magically generating one database.yml config for each database and one model for each table isn't possible, or at least isn't the best solution.

    I looked into drnic's magic solutions, and DataFabric, and even the source code of ActiveRecord. Maybe I could use ERB to generate database.yml and do the database connection in an around filter, and maybe I could use named_scope to dynamically decide the table name for find, but update/create operations are bound to "self.class.quoted_table_name", so I couldn't easily get my problem solved. And even if I could generate one model for each table, there would be up to 30 of them at most -- but that is just not DRY!

    What I need is a clean solution like the following DSL:

        class Order < ActiveRecord::Base
          shard_by :order_serialno do |key|
            [get_db_config_by(key), # because some or all of the databases might share the same machine
                                    # in a regular way or can be configured by a hash of regex,
                                    # and it can also be a const
             get_db_name_by(key),
             get_tb_name_by(key),
            ]
          end
        end

    Can anybody enlighten me? Any help would be greatly appreciated.

  • Multithreading for loop while maintaining order

    - by David
    I started messing around with multithreading for a CPU-intensive batch process I'm running. Essentially I'm trying to condense multiple single-page tiffs into single PDF documents. This works fine with a foreach loop or standard iteration, but can be very slow for documents of several hundred pages. I tried the following, based on some examples I found, to use multithreading, and it gives a significant performance improvement; however, it obliterates the page order: instead of 1,2,3,4 it comes out as 1,3,4,2,6,5, depending on which thread completes first. My question is: how would I utilize this technique while maintaining the page order, and if I can, will that negate the performance benefit of the multithreading? Thank you in advance.

        PdfDocument doc = new PdfDocument();
        string mail = textBox1.Text;
        string[] split = mail.Split(new string[] { Environment.NewLine }, StringSplitOptions.None);
        int counter = split.Count();

        // Source must be array or IList.
        var source = Enumerable.Range(0, 100000).ToArray();

        // Partition the entire source array.
        var rangePartitioner = Partitioner.Create(0, counter);
        double[] results = new double[counter];

        // Loop over the partitions in parallel.
        Parallel.ForEach(rangePartitioner, (range, loopState) =>
        {
            // Loop over each range element without a delegate invocation.
            for (int i = range.Item1; i < range.Item2; i++)
            {
                f_prime = split[i].Replace(" ", "");
                PdfPage page = doc.AddPage();
                XGraphics gfx = XGraphics.FromPdfPage(page);
                XImage image = XImage.FromFile(f_prime);
                double x = 0;
                gfx.DrawImage(image, x, 0);
            }
        });
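
    Not C#/PdfSharp, but for the general shape of what is being asked -- do the CPU-heavy part in parallel while keeping the order-sensitive append sequential -- here is a minimal Python sketch; pool.map yields results in input order regardless of which worker finishes first:

        from concurrent.futures import ProcessPoolExecutor

        def render(page_no):
            # stand-in for the expensive per-page work (loading the tiff, drawing it)
            return f"rendered page {page_no}"

        if __name__ == "__main__":
            document = []
            with ProcessPoolExecutor() as pool:
                # results come back in submission order, so the cheap append
                # below stays sequential and the page order is preserved
                for page in pool.map(render, range(1, 11)):
                    document.append(page)
            print(document)        # pages 1..10, in order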

  • Top n items in a List ( including duplicates )

    - by Krishnan
    I'm trying to find an efficient way to obtain the top N items in a very large list, possibly containing duplicates. I first tried sorting and slicing, which works, but this seems unnecessary: you shouldn't need to sort a very large list if you just want the top 20 members. So I wrote a recursive routine which builds the top-n list. This also works, but is very much slower than the non-recursive one!

    Question: why is my second routine (elite2) so much slower than elite, and how do I make it faster? My code is attached below. Thanks.

        import scala.collection.SeqView
        import scala.math.min

        object X {
          def elite(s: SeqView[Int, List[Int]], k: Int): List[Int] = {
            s.sorted.reverse.force.slice(0, min(k, s.size))
          }

          def elite2(s: SeqView[Int, List[Int]], k: Int, s2: List[Int] = Nil): List[Int] = {
            if (k == 0 || s.size == 0) s2.reverse
            else {
              val m = s.max
              val parts = s.force.partition(_ == m)
              val whole = if (parts._1.size > 1) parts._1.tail ::: parts._2 else parts._2
              elite2(whole.view, k - 1, m :: s2)
            }
          }

          def main(args: Array[String]) = {
            val N = 1000000 / 3
            val x = List(N to 1 by -1).flatten.map(x => List(x, x, x)).flatten.view
            println(elite2(x, 20))
            println(elite(x, 20))
          }
        }
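
    Tangential to the Scala timing question, but for the "you shouldn't need to sort the whole list for the top 20" point: the usual non-sorting approach is a bounded heap, which handles duplicates and runs in O(n log k). A small Python illustration (test data made up, loosely mirroring the triplicated list above):

        import heapq

        # a large list with duplicates, similar in spirit to the question's test data
        data = [x for x in range(333334, 0, -1) for _ in range(3)]
        top20 = heapq.nlargest(20, data)   # internally keeps a heap of at most 20 items
        print(top20)                       # 333334, 333334, 333334, 333333, ...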

  • Redundancy in doing sum()

    - by Abhi
    table1 - id, time_stamp, value

    This table contains 10 ids, and each id has a value for each hour of the day, so for one day there are 240 records in this table.

    table2 - id

    Table2 contains a dynamically changing subset of the ids present in table1. At any particular instant, the intention is to get sum(value) from table1, considering only the ids in table2, grouping by each hour of that day, giving the summarized values a rank, and repeating this for each day. The query is at this stage:

        select time_stamp, sum(value),
               rank() over (partition by trunc(time_stamp) order by sum(value) desc) rn
        from table1
        where exists (select t2.id from table2 t2 where id = t2.id)
          and time_stamp >= to_date('05/04/2010 00', 'dd/mm/yyyy hh24')
          and time_stamp <= to_date('25/04/2010 23', 'dd/mm/yyyy hh24')
        group by time_stamp
        order by time_stamp asc

    If the query is correct, can it be made more efficient, considering that table1 will actually contain thousands of ids instead of 10?

    EDIT: I am using sum(value) twice in the query, and I have not been able to find a workaround so that the sum() is done only once. Please help with this.
