Search Results

Search found 5849 results on 234 pages for 'partition scheme'.


  • Surrogate key for date dimension?

    - by Navin
    There are two schools of thought: use a surrogate key, preferably in the format YYYYMMDD, as this will always be sequential; or eliminate the date dimension surrogate key and use the actual date instead. My questions to experts on dimensional modeling are: 1> Which design would you prefer and why? 2> How should we handle unknown values in each case; can we simply place NULL in the fact table for unknown dates, since a foreign key can be NULL (if not, why)? 3> If we need to partition the fact table on the date column, how would we achieve that in case 1? I am inclined towards using the actual date and using NULL to represent unknown dates in the fact table, as date-related validation on the fact can be done without needing to look into the dimension table.
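
    One way to picture the first approach is a minimal sketch of a YYYYMMDD-style surrogate key with an explicit "unknown" member instead of a NULL foreign key; the table and column names below are made up for illustration, and the syntax assumes SQL Server:

        CREATE TABLE dim_date (
            date_key   int      NOT NULL PRIMARY KEY,  -- e.g. 20100131, or -1 for "unknown"
            full_date  date     NULL,                  -- NULL only for the unknown member
            year_num   smallint NULL,
            month_num  tinyint  NULL
        );

        -- Seed the special member once, so fact rows never need a NULL foreign key.
        INSERT INTO dim_date (date_key, full_date, year_num, month_num)
        VALUES (-1, NULL, NULL, NULL);

        -- Fact rows reference either a real YYYYMMDD key or the unknown member.
        CREATE TABLE fact_sales (
            date_key  int            NOT NULL REFERENCES dim_date (date_key),
            amount    decimal(18, 2) NOT NULL
        );

    Because such a key is an integer that sorts chronologically, a partition function on date_key (e.g. boundaries at 20100101, 20110101, ...) still lines up with calendar ranges, which is one answer to question 3.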


  • IndexOutOfRangeException while using WriteLine in nested Parallel.For loops

    - by Umar Asif
    I am trying to write Kinect depth data to a text file using nested Parallel.For loops with the following code. However, it gives an IndexOutOfRangeException. The code works perfectly with simple for loops, but then it hangs the UI, since the depth format is set to 640x480, causing the loops to write 307200 lines to the text file at 30 fps. Therefore, I switched to the Parallel.For scheme. If I omit the WriteLine command from the nested loops, the code works fine, which indicates that the IndexOutOfRangeException is arising at the WriteLine command. I do not know how to troubleshoot this. Please advise. Any better workarounds to avoid UI freezing? Thanks. using (DepthImageFrame depthImageframe = d.OpenDepthImageFrame()) { if (depthImageframe == null) return; depthImageframe.CopyPixelDataTo(depthPixelData); swDepth = new StreamWriter(@"E:\depthData.txt", false); int i = 0; Parallel.For(0, depthImageframe.Width, delegate(int x) { Parallel.For(0, depthImageframe.Height, delegate(int y) { p[i] = sensor.MapDepthToSkeletonPoint(depthImageframe.Format, x, y, depthPixelData[x + depthImageframe.Width * y]); swDepth.WriteLine(i + "," + p[k].X + "," + p[k].Y + "," + p[k].Z); i++; }); }); swDepth.Close(); } }
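
    The shared counter i and the single StreamWriter are both touched from many threads at once in that code, so one common workaround is to compute each row's index from x and y, collect results into a pre-sized array inside the parallel loop, and do the file I/O sequentially afterwards. A rough C# sketch of that shape (width, height, and Map are hypothetical stand-ins, not the asker's full Kinect setup):

        // Assumes width, height, and a Map(x, y) function are defined elsewhere.
        var lines = new string[width * height];

        Parallel.For(0, width, x =>
        {
            for (int y = 0; y < height; y++)
            {
                int index = x * height + y;   // deterministic slot, no shared counter
                var point = Map(x, y);        // placeholder for MapDepthToSkeletonPoint
                lines[index] = $"{index},{point.X},{point.Y},{point.Z}";
            }
        });

        // Single-threaded write keeps StreamWriter out of the parallel section.
        using (var swDepth = new StreamWriter(@"E:\depthData.txt", false))
        {
            foreach (var line in lines)
                swDepth.WriteLine(line);
        }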


  • Are projects with a high developer turnover rate really a bad thing?

    - by John
    I've inherited a lot of web projects that experienced high developer turnover rates. Sometimes these web projects are a horrible patchwork of band-aid solutions. Other times they can be somewhat maintainable mosaics of half-done features, each built with a different architectural style. Every time I inherit these projects, I wish the previous developers could explain to me why things got so bad. What puzzles me is the reaction of the owners (either a manager, a middle-man company, or a client). They seem to think, "Well, if you leave, I'll just find another developer." Or they think, "Oh, it costs that much money to refactor the system? I know another developer who can do it at half the price. I'll hire him if I can't afford you." I'm guessing that the high developer turnover rate is related to the owner's mentality of "If you think it's a bad idea to build this, I'll just find another (possibly cheaper) developer to do what I want." For the owners, the approach seems to work, because their business is thriving. Unfortunately, it's no fun for the developers who go AWOL 3-4 months after working with poor code, strict timelines, and little feedback. So my question is the following: are the following symptoms of a project really such a bad thing for business? A high developer turnover rate; poorly built technology - often a patchwork of different and inappropriately used architectural styles; and owners without a clear roadmap for their web project who request features on a whim. I've seen numerous businesses prosper while experiencing the symptoms above. So as a programmer, even though my instincts tell me the above points are terrible, I'm forced to take a step back and ask, "Are things really that bad in the grand scheme of things?" If not, I will re-evaluate my approach to these projects.


  • Substitution cipher with different alphabet lengths

    - by seanizer
    I would like to implement a simple substitution cipher to mask private IDs in URLs. I know what my IDs will look like (a combination of uppercase ASCII, digits and underscore), and they will be rather long, as they are composite keys. I would like to use a longer alphabet to shorten the resulting codes (I'd like to use upper- and lowercase ASCII letters, digits and nothing else). So my incoming alphabet would be [A-Z0-9_] (37 chars) and my outgoing alphabet would be [A-Za-z0-9] (62 chars), so a compression of almost 50% would be available. Let's say my URLs look like this: /my/page/GFZHFFFZFZTFZTF_24_F34 and I want them to look like this instead: /my/page/Ft32zfegZFV5 Obviously both arrays would be shuffled to bring some random order in. This does not have to be secure. If someone figures it out: fine, but I don't want the scheme to be obvious. My desired solution would be to convert the string to an integer representation in radix 37, convert the radix to 62, and use the second alphabet to write out that number. Is there any sample code available that does something similar? Integer.parseInt ( http://java.sun.com/javase/6/docs/api/java/lang/Integer.html#parseInt%28java.lang.String,%20int%29 ) has some similar logic, but it is hard-coded to use standard digit behavior. Any hints? I am using Java to implement this, but code or pseudo-code in any other language is of course also helpful.
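
    A rough sketch of that radix-37-to-radix-62 idea in Java, using BigInteger as the intermediate number; the two alphabet strings are placeholders and would be shuffled (consistently) in a real setup:

        import java.math.BigInteger;

        public class IdMask {
            // Hypothetical alphabets; shuffle them before use.
            private static final String IN  = "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789_";                          // radix 37
            private static final String OUT = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789"; // radix 62

            static String encode(String id) {
                BigInteger n = BigInteger.ZERO;
                BigInteger inBase = BigInteger.valueOf(IN.length());
                for (char c : id.toCharArray()) {
                    n = n.multiply(inBase).add(BigInteger.valueOf(IN.indexOf(c)));
                }
                StringBuilder out = new StringBuilder();
                BigInteger outBase = BigInteger.valueOf(OUT.length());
                while (n.signum() > 0) {
                    BigInteger[] qr = n.divideAndRemainder(outBase);
                    out.append(OUT.charAt(qr[1].intValue()));
                    n = qr[0];
                }
                return out.reverse().toString();
            }
        }

    Decoding is the same loop with the alphabets swapped. One caveat with plain base conversion is that leading characters with digit value 0 in the incoming ID are lost, so IDs that can start with that character need a length prefix or similar.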


  • C Run-Time library part 2

    - by b-gen-jack-o-neill
    Hi, it was suggested that when I have further questions about my older ones, I should create a newer question and refer to the old one. So, this is the original question: What is the C runtime library? OK, from your answers, I now get that statically linked libraries are Microsoft's implementation of the C standard functions. Now, if I get it right, the scheme would be as follows: I want to use printf(), so I must include <stdio.h>, which just tells the compiler there is a function printf() with these parameters. Now, when I compile the code, because printf() is defined in the C standard library, and because Microsoft decided to name it the C run-time library, it gets automatically statically linked from libcmt.lib (if libcmt.lib is set in the compiler) at compile time. I ask because on Wikipedia, in the article about runtime libraries, it says that the runtime library is linked at runtime, but .lib files are linked at compile time - am I right? Now, what confuses me: there is a .dll version of the C standard library. But I thought that to link a .dll file, you must actually call a WinAPI routine to load that library. So how can these functions be dynamically linked, if there is no static library providing code that tells Windows to load the desired functions from the DLL? And a really last question on this subject - are C standard library functions also calls to WinAPI, even though they are not in .dll files like the more advanced WinAPI functions? I mean, in the end, to access the framebuffer and print something you must tell Windows to do it, since the OS cannot let you directly manipulate the hardware. I think of it like this: the OS must be written to support all C standard library functions the same way across similar versions, since they are statically linked, and it can support the more complex WinAPI calls differently, because a new version of the OS can have adjustments in the .dll file.


  • Moving inserted container element if possible

    - by doublep
    I'm trying to achieve the following optimization in my container library: when inserting an lvalue-referenced element, copy it to internal storage; but when inserting an rvalue-referenced element, move it if supported. The optimization is supposed to be useful e.g. if the contained element type is something like std::vector, where moving if possible would give a substantial speedup. However, so far I have been unable to devise any working scheme for this. My container is quite complicated, so I can't just duplicate the insert() code several times: it is large. I want to keep all the "real" code in some inner helper, say do_insert() (it may be templated), and the various insert()-like functions would just call that with different arguments. My best-bet code for this (a prototype, of course, without doing anything real): #include <iostream> #include <utility> struct element { element () { }; element (element&&) { std::cerr << "moving\n"; } }; struct container { void insert (const element& value) { do_insert (value); } void insert (element&& value) { do_insert (std::move (value)); } private: template <typename Arg> void do_insert (Arg arg) { element x (arg); } }; int main () { { // Shouldn't move. container c; element x; c.insert (x); } { // Should move. container c; c.insert (element ()); } } However, this doesn't work, at least with GCC 4.4 and 4.5: it never prints "moving" on stderr. Or is what I want impossible to achieve, and that's why emplace()-like functions exist in the first place?
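
    The usual trick for this kind of single do_insert() helper is perfect forwarding: take Arg&& and pass the argument through std::forward, so an lvalue arrives as an lvalue reference (and is copied) while an rvalue stays an rvalue (and is moved). A minimal sketch, assuming a copy constructor is also declared so the lvalue path compiles:

        #include <iostream>
        #include <utility>

        struct element {
            element() {}
            element(const element&) { std::cerr << "copying\n"; }
            element(element&&)      { std::cerr << "moving\n"; }
        };

        struct container {
            void insert(const element& value) { do_insert(value); }
            void insert(element&& value)      { do_insert(std::move(value)); }
        private:
            template <typename Arg>
            void do_insert(Arg&& arg) {
                // std::forward preserves the value category of the original argument.
                element x(std::forward<Arg>(arg));
            }
        };

        int main() {
            container c;
            element e;
            c.insert(e);          // prints "copying"
            c.insert(element{});  // prints "moving"
        }

    Taking Arg by value, as in the prototype above, loses the value category before element x is constructed, which is why the forwarding-reference version is the usual fix.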


  • Password hashing, salt and storage of hashed values

    - by Jonathan Leffler
    Suppose you were at liberty to decide how hashed passwords were to be stored in a DBMS. Are there obvious weaknesses in a scheme like this one? To create the hash value stored in the DBMS, take: A value that is unique to the DBMS server instance as part of the salt, And the username as a second part of the salt, And create the concatenation of the salt with the actual password, And hash the whole string using the SHA-256 algorithm, And store the result in the DBMS. This would mean that anyone wanting to come up with a collision should have to do the work separately for each user name and each DBMS server instance separately. I'd plan to keep the actual hash mechanism somewhat flexible to allow for the use of the new NIST standard hash algorithm (SHA-3) that is still being worked on. The 'value that is unique to the DBMS server instance' need not be secret - though it wouldn't be divulged casually. The intention is to ensure that if someone uses the same password in different DBMS server instances, the recorded hashes would be different. Likewise, the user name would not be secret - just the password proper. Would there be any advantage to having the password first and the user name and 'unique value' second, or any other permutation of the three sources of data? Or what about interleaving the strings? Do I need to add (and record) a random salt value (per password) as well as the information above? (Advantage: the user can re-use a password and still, probably, get a different hash recorded in the database. Disadvantage: the salt has to be recorded. I suspect the advantage considerably outweighs the disadvantage.) There are quite a lot of related SO questions - this list is unlikely to be comprehensive: Encrypting/Hashing plain text passwords in database Secure hash and salt for PHP passwords The necessity of hiding the salt for a hash Clients-side MD5 hash with time salt Simple password encryption Salt generation and Open Source software I think that the answers to these questions support my algorithm (though if you simply use a random salt, then the 'unique value per server' and username components are less important).


  • Create a group indicator (SQL)

    - by user1723699
    I am looking to create a group indicator for a query using SQL (Oracle specifically). Basically, I am looking for duplicate entries on certain columns, and while I can find those, what I also want is some kind of indicator to say which rows the duplicates are from. Below is an example of what I am looking to do (looking for duplicates on Name, Zip, Phone). The rows with Name = aaa are all in the same group, the bb rows are not, and the c rows are. Is there even a way to do this? I was thinking of something with OVER (PARTITION BY ...), but I can't think of a way to increment only once per group.
        +----------+---------+-----------+------------+-----------+-----------+
        | Name     | Zip     | Phone     | Amount     | Duplicate | Group     |
        +----------+---------+-----------+------------+-----------+-----------+
        | aaa      | 1234    | 5555555   | 500        | X         | 1         |
        | aaa      | 1234    | 5555555   | 285        | X         | 1         |
        | bb       | 545     | 6666666   | 358        |           | 2         |
        | bb       | 686     | 7777777   | 898        |           | 3         |
        | aaa      | 1234    | 5555555   | 550        | X         | 1         |
        | c        | 5555    | 8888888   | 234        | X         | 4         |
        | c        | 5555    | 8888888   | 999        | X         | 4         |
        | c        | 5555    | 8888888   | 230        | X         | 4         |
        +----------+---------+-----------+------------+-----------+-----------+
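
    One way to get exactly that kind of group number in Oracle is DENSE_RANK() over the duplicate-defining columns, with a windowed COUNT(*) supplying the duplicate flag; a sketch against a hypothetical table name:

        SELECT name,
               zip,
               phone,
               amount,
               CASE WHEN COUNT(*) OVER (PARTITION BY name, zip, phone) > 1
                    THEN 'X' END                               AS duplicate,
               DENSE_RANK() OVER (ORDER BY name, zip, phone)   AS grp
        FROM   my_table;

    DENSE_RANK numbers each distinct (name, zip, phone) combination 1, 2, 3, ... without gaps, which matches the "increment once per group" requirement, although the numbers follow sort order rather than the order in which the rows first appear.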


  • Clustered index - multi-part vs single-part index and effects of inserts/deletes

    - by Anssssss
    This question is about what happens with the reorganizing of data in a clustered index when an insert is done. I assume that it should be more expensive to do inserts on a table which has a clustered index than one that does not because reorganizing the data in a clustered index involves changing the physical layout of the data on the disk. I'm not sure how to phrase my question except through an example I came across at work. Assume there is a table (Junk) and there are two queries that are done on the table, the first query searches by Name and the second query searches by Name and Something. As I'm working on the database I discovered that the table has been created with two indexes, one to support each query, like so: --drop table Junk1 CREATE TABLE Junk1 ( Name char(5), Something char(5), WhoCares int ) CREATE CLUSTERED INDEX IX_Name ON Junk1 ( Name ) CREATE NONCLUSTERED INDEX IX_Name_Something ON Junk1 ( Name, Something ) Now when I looked at the two indexes, it seems that IX_Name is redundant since IX_Name_Something can be used by any query that desires to search by Name. So I would eliminate IX_Name and make IX_Name_Something the clustered index instead: --drop table Junk2 CREATE TABLE Junk2 ( Name char(5), Something char(5), WhoCares int ) CREATE CLUSTERED INDEX IX_Name_Something ON Junk2 ( Name, Something ) Someone suggested that the first indexing scheme should be kept since it would result in more efficient inserts/deletes (assume that there is no need to worry about updates for Name and Something). Would that make sense? I think the second indexing method would be better since it means one less index needs to be maintained. I would appreciate any insight into this specific example or directing me to more info on maintenance of clustered indexes.


  • Can't log in with a valid password using Authlogic and Ruby on Rails?

    - by kbighorse
    We support a bit of an unusual scheme. We don't require a password on User creation, and use password_resets to add a password to the user later, on demand. The problem is, once a password is created, the console indicates the password is valid: user.valid_password? 'test' = true but in my UserSessions controller, @user_session.save returns false using the same password. What am I not seeing? Kimball UPDATE: Providing more details, here is the output when saving the new password: Processing PasswordResetsController#update (for 127.0.0.1 at 2011-01-31 14:01:12) [PUT] Parameters: {"commit"="Update password", "action"="update", "_method"="put", "authenticity_token"="PQD4+eIREKBfHR3/fleWuQSEtZd7RIvl7khSYo5eXe0=", "id"="v3iWW5eD9P9frbEQDvxp", "controller"="password_resets", "user"={"password"="johnwayne"}} The applicable SQL is: UPDATE users SET updated_at = '2011-01-31 22:01:12', crypted_password = 'blah', perishable_token = 'blah', password_salt = 'blah', persistence_token = 'blah' WHERE id = 580 I don't see an error per se, @user_session.save just returns false, as if the password didn't match. I skip validating passwords in the User model: class User < ActiveRecord::Base acts_as_authentic do |c| c.validate_password_field = false end Here's the simplified controller code: def create logger.info("SAVED SESSION? #{@user_session.save}") end which outputs: Processing UserSessionsController#create (for 127.0.0.1 at 2011-01-31 14:16:59) [POST] Parameters: {"commit"="Login", "user_session"={"remember_me"="0", "password"="johnwayne", "email"="[email protected]"}, "action"="create", "authenticity_token"="PQD4+eIREKBfHR3/fleWuQSEtZd7RIvl7khSYo5eXe0=", "controller"="user_sessions"} User Columns (2.2ms) SHOW FIELDS FROM users User Load (3.7ms) SELECT * FROM users WHERE (users.email = '[email protected]') ORDER BY email ASC LIMIT 1 SAVED SESSION? false CACHE (0.0ms) SELECT * FROM users WHERE (users.email = '[email protected]') ORDER BY email ASC LIMIT 1 Redirected to http://localhost:3000/login Lastly, the console indicates that the new password is valid: $ u.valid_password? 'johnwayne' = true Would love to do it all in the console, is there a way to load UserSession controller and call methods directly? Kimball


  • Changing the indexing on existing table in SQL Server 2000

    - by Raj
    Guys, Here is the scenario: SQL Server 2000 (8.0.2055) Table currently has 478 million rows of data. The Primary Key column is an INT with IDENTITY. There is an Unique Constraint imposed on two other columns with a Non-Clustered Index. This is a vendor application and we are only responsible for maintaining the DB. Now the vendor has recommended doing the following "to improve performance" Drop the PK and Clustered Index Drop the non-clustered index on the two columns with the UNIQUE CONSTRAINT Recreate the PK, with a NON-CLUSTERED index Create a CLUSTERED index on the two columns with the UNIQUE CONSTRAINT I am not convinced that this is the right thing to do. I have a number of concerns. By dropping the PK and indexes, you will be creating a heap with 478 million rows of data. Then creating a CLUSTERED INDEX on two columns would be a really mammoth task. Would creating another table with the same structure and new indexing scheme and then copying the data over, dropping the old table and renaming the new one be a better approach? I am also not sure how the stored procs will react. Will they continue using the cached execution plan, considering that they are not being explicitly recompiled. I am simply not able to understand what kind of "performance improvement" this change will provide. I think that this will actually have the reverse effect. All thoughts welcome. Thanks in advance, Raj


  • Android : Handle OAuth callback using intent-filter

    - by Dave Allison
    I am building an Android application that requires OAuth. I have all the OAuth functionality working except for handling the callback from Yahoo. I have the following in my AndroidManifest.xml: <intent-filter> <action android:name="android.intent.action.VIEW"></action> <category android:name="android.intent.category.DEFAULT"></category> <category android:name="android.intent.category.BROWSABLE"></category> <data android:host="www.test.com" android:scheme="http"></data> </intent-filter> where www.test.com will be substituted with a domain that I own. It seems: this filter is triggered when I click on a link on a page; it is not triggered on the redirect by Yahoo - the browser just opens the website at www.test.com; and it is not triggered when I enter the domain name directly in the browser. So can anybody help me with: When exactly will this intent-filter be triggered? Are there any changes to the intent-filter or permissions that will widen the filter to apply to redirect requests? Any other approaches I could use? Thanks for your help.
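
    A common workaround for OAuth callbacks on Android is to register a private URI scheme rather than an http host, and have the provider redirect to that scheme; a sketch (the scheme and host names here are made up):

        <intent-filter>
            <action android:name="android.intent.action.VIEW" />
            <category android:name="android.intent.category.DEFAULT" />
            <category android:name="android.intent.category.BROWSABLE" />
            <!-- e.g. register redirect_uri = myapp://oauth-callback with the provider -->
            <data android:scheme="myapp" android:host="oauth-callback" />
        </intent-filter>

    An http filter like the original is generally only consulted when a VIEW intent is actually fired (such as tapping a link), not when the browser itself follows a server-side redirect or a typed address, which matches the behaviour described above.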


  • git-svn: reset tracking for master

    - by digitala
    I'm using git-svn to work with an SVN repository. My working copies have been created using git svn clone -s http://foo.bar/myproject so that my working copy follows the default directory scheme for SVN (trunk, tags, branches). Recently I've been working on a branch which was created using git-svn branch myremotebranch and checked-out using git checkout --track -b mybranch myremotebranch. I needed to work from multiple locations, so from the branch I git-svn dcommit-ed files to the SVN repository quite regularly. After finishing my changes, I switched back to the master and executed a merge, committed the merge, and tried to dcommit the successful merge to the remote trunk. It seems as though after the merge the remote tracking for the master has switched to the branch I was working on: # git checkout master # git merge mybranch ... (successful) # git add . # git commit -m '...' # git svn dcommit Committing to http://foo.bar/myproject/branches/myremotebranch ... # Is there a way I can update the master so that it's following remotes/trunk as before the merge? I'm using git 1.7.0.5, if that's any help.


  • Are functional programming languages good for practical tasks?

    - by Clueless
    It seems to me from my experimenting with Haskell, Erlang and Scheme that functional programming languages are a fantastic way to answer scientific questions. For example, taking a small set of data and performing some extensive analysis on it to return a significant answer. It's great for working through some tough Project Euler questions or trying out the Google Code Jam in an original way. At the same time it seems that, by their very nature, they are more suited to finding analytical solutions than to actually performing practical tasks. I noticed this most strongly in Haskell, where everything is evaluated lazily and your whole program boils down to one giant analytical solution for some given data that you either hard-code into the program or tack on messily through Haskell's limited IO capabilities. Basically, the tasks I would call 'practical', such as "accept a request, find and process the requested data, and return it formatted as needed", seem to translate much more directly into procedural languages. The most luck I have had finding a functional language that works like this is Factor, which I would liken to a reverse-Polish-notation version of Python. So I am just curious whether I have missed something in these languages, or whether I am just way off the ball in how I ask this question. Does anyone have examples of functional languages that are great at performing practical tasks, or practical tasks that are best performed by functional languages?


  • How to scale a PHP application (servers, mysql, memcache)

    - by Stéphane Goetz
    Hi, I'm currently creating a website for a social project in Switzerland, and before there is an overflow of users, I want to prepare the application to scale. I have answered many questions by myself, but some are left. Let me explain what I want to do. First, at the beginning the application will have only one server (for a short time) with DNS, PHP, MySQL, data, and memcache. Second, I will then split them in two: DNS, MySQL and memcache on one; data and PHP on the other. Third, here is the problem: I don't know exactly how to do it from here to keep the application running well. I could do: Front: load balancer, memcache, DNS; Web 1: PHP, data; Web 2: PHP, data; MySQL. This would be the scheme; all PHP sessions are kept in the DB. But how do I sync the data? Do I run rsync to keep them up to date? Do I put them on a separate (network) disk to be sure? But in that case, what do I do about user uploads? And if the website gets more successful and we have to move to bigger structures, wouldn't that create some latency on updates? Or would it be a good idea to go directly to Amazon's web services? Some info: I use CodeIgniter as the framework, and I use Linux as the web server (distribution not chosen yet, but it should be Debian). Thanks in advance for your answers.


  • What is a .NET managed module?

    - by Abhijeet Patel
    I know it's a Windows PE32 file, but I also know that the unit of deployment in .NET is an assembly, which in turn has a manifest and can be made up of multiple managed modules. My questions are: 1) How would you create multiple managed modules when building a project such as a class library or a console app? 2) Is there a way to tell the compiler (via the project properties, for example) to partition your source code files into multiple managed modules? If so, what is the benefit of doing so? 3) Can managed modules span assemblies? 4) Are separate files created on disk when the source code is compiled, or are they created in memory and directly embedded in an assembly?
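
    For question 1, a sketch of how multi-module assemblies are typically produced from the C# compiler's command line rather than from Visual Studio project properties; the file names are hypothetical:

        rem Compile one source file into a standalone managed module (.netmodule)
        csc /target:module /out:Helpers.netmodule Helpers.cs

        rem Build the main assembly and attach the module to its manifest
        csc /target:library /addmodule:Helpers.netmodule /out:MyLib.dll MyLib.cs

    Both files end up on disk (one .netmodule and one .dll), which also speaks to question 4: the assembly's manifest records the module, but the module remains a separate file unless it is later merged into the assembly.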


  • UIViewController memory management

    - by jAmi
    Hi, I have a very basic memory-management issue with my UIViewController (or any other object that I create). The problem is that in Instruments my object allocation graph is always rising, even though I am calling release on the objects and then assigning them nil. I have 2 UIViewController subclasses, each initialized from a NIB; I add the first ViewController to the main window like [window addSubView:first.view]; Then in my first ViewController's nib file I have a button which loads the second ViewController like: -(IBAction)loadSecondView{ if(second!=nil){ //second is set as an iVar and @property (nonatomic, retain)ViewController2* sceond; [second release]; second=nil; } second=[[ViewController2* second]initWithNibName:@"ViewController2" bundle:nil]; [self.view addSubView:second.view]; } In my (second) ViewController2 I have a button with an action method -(IBAction) removeSecond{ [self.view removeFromSuperView]; } Please let me know whether the above scheme manages memory properly. In Instruments it does not show the release of any allocation, and the status bar graph keeps on rising.


  • Advice Please: SQL Server Identity vs Unique Identifier keys when using Entity Framework

    - by c.batt
    I'm in the process of designing a fairly complex system. One of our primary concerns is supporting SQL Server peer-to-peer replication. The idea is to support several geographically separated nodes. A secondary concern has been using a modern ORM in the middle tier. Our first choice has always been Entity Framework, mainly because the developers like to work with it. (They love the LiNQ support.) So here's the problem: With peer-to-peer replication in mind, I settled on using uniqueidentifier with a default value of newsequentialid() for the primary key of every table. This seemed to provide a good balance between avoiding key collisions and reducing index fragmentation. However, it turns out that the current version of Entity Framework has a very strange limitation: if an entity's key column is a uniqueidentifier (GUID) then it cannot be configured to use the default value (newsequentialid()) provided by the database. The application layer must generate the GUID and populate the key value. So here's the debate: abandon Entity Framework and use another ORM: use NHibernate and give up LiNQ support use linq2sql and give up future support (not to mention get bound to SQL Server on DB) abandon GUIDs and go with another PK strategy devise a method to generate sequential GUIDs (COMBs?) at the application layer I'm leaning towards option 1 with linq2sql (my developers really like linq2[stuff]) and 3. That's mainly because I'm somewhat ignorant of alternate key strategies that support the replication scheme we're aiming for while also keeping things sane from a developer's perspective. Any insight or opinion would be greatly appreciated.
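
    Option 4 (generating sequential GUIDs at the application layer) is usually done with a "COMB"-style GUID that overwrites part of the value with the current timestamp, so SQL Server sorts them roughly in creation order. A rough C# sketch, assuming SQL Server's uniqueidentifier ordering (which weights the last six bytes most heavily); treat it as an illustration rather than a drop-in replacement for newsequentialid():

        // Rough COMB-style GUID: the last 6 bytes carry the current timestamp
        // so values generated later sort later in SQL Server.
        static Guid NewCombGuid()
        {
            byte[] guidBytes = Guid.NewGuid().ToByteArray();
            byte[] tickBytes = BitConverter.GetBytes(DateTime.UtcNow.Ticks);
            if (BitConverter.IsLittleEndian)
                Array.Reverse(tickBytes);              // most-significant byte first

            // Copy 6 of the timestamp bytes into the tail of the GUID.
            Array.Copy(tickBytes, 2, guidBytes, 10, 6);
            return new Guid(guidBytes);
        }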


  • Ubuntu GRUB problem

    - by pravinp
    Hi, I have (or had) Windows XP on one partition of my HDD. On the second one I installed Ubuntu 9.10 yesterday, and after rebooting into Windows XP I get the error "GRUB LOADING.". Now, I know you can play around with that using a live GRUB and so on - putting GRUB on top of the MBR, etc., something like that. I want to know: if I reinstall with Ubuntu 10.04, which is available now, will it still give me the same error, or will I be able to use both OSes? (I.e., is that problem solved in version 10.04 or still there? And if it is solved, will reinstalling Ubuntu fix the problem or not?) Any help is welcome. Thanks


  • How can I make an event created through Google Calendar's API send an invitation email?

    - by Cebjyre
    I'm trying to create an event through the API and it is mostly working, with the exception that while the new events are being created in the invitees calendars, no emails are being sent. Creating the event from the web interface is pushing the event through, as well as sending the email (except one account that doesn't get any notifications at all, but that's not relevant to my current problem). The event I am trying to push in is: <entry xmlns='http://www.w3.org/2005/Atom' xmlns:gd='http://schemas.google.com/g/2005'> <category scheme='http://schemas.google.com/g/2005#kind' term='http://schemas.google.com/g/2005#event'></category> <title type='text'>test event</title> <content type='text'>content.</content> <gd:transparency value='http://schemas.google.com/g/2005#event.opaque'> </gd:transparency> <gd:eventStatus value='http://schemas.google.com/g/2005#event.confirmed'> </gd:eventStatus> <gd:where valueString='somewhere'></gd:where> <gd:who email="[redacted]" rel='http://schemas.google.com/g/2005#event.attendee' valueString='Me'><gd:attendeeStatus value='http://schemas.google.com/g/2005#event.invited'/></gd:who> <gd:who email="[redacted again]" rel='http://schemas.google.com/g/2005#event.organizer' valueString='Also Me'><gd:attendeeStatus value='http://schemas.google.com/g/2005#event.accepted'/></gd:who> <gd:when startTime='2010-05-18T15:30:00.000+10:00' endTime='2010-05-18T16:00:00.000+10:00'></gd:when> </entry> And when I request event lists I can't see any large difference between events created through the API and through the web interface.


  • Automatically create a table on a MySQL server based on date?

    - by Anthony
    Is there an equivalent to cron for MySQL? I have a PHP script that queries a table based on the month and year, like: SELECT * FROM data_2010_1 What I have been doing until now is: every time the script executes, it checks for the table; if it exists, it does the work, and if it doesn't, it creates the table. I was wondering if I can just set something up on the MySQL server itself that will create the table (based on a default table) at the stroke of midnight on the first of the month. Update: based on the comments I've gotten, I'm thinking this isn't the best way to achieve my goal. So here are two more questions: If I have a table with thousands of rows added monthly, is this potentially a drag on resources? If so, what is the best way to partition this table, since the above is verboten? What are the potential problems with the home-grown method I originally thought up?
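
    MySQL 5.1 and later does have a built-in scheduler that can play the cron role; a sketch using CREATE EVENT, with made-up table names, that creates a month's table at the start of each month:

        -- Requires the event scheduler to be on: SET GLOBAL event_scheduler = ON;
        CREATE EVENT create_monthly_table
        ON SCHEDULE EVERY 1 MONTH
        STARTS '2010-06-01 00:00:00'
        DO
          CREATE TABLE IF NOT EXISTS data_2010_6 LIKE data_template;

    The catch is that the table name is fixed inside the event body, so in practice the DO block usually calls a stored procedure that builds the CREATE TABLE ... LIKE statement dynamically with PREPARE/EXECUTE; native partitioning (PARTITION BY RANGE on a date column in one table) is often the cleaner alternative the commenters were hinting at.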


  • C# Breakpoint Weirdness

    - by Dan
    In my program I've got two data files, A and B. The data in A is static, and the data in B refers back to the data in A. In order to make sure the data in B is invalidated when A is changed, I keep an identifier for each of the links, which is a long byte string identifying the data. I get this string using BitConverter on some of the important properties. My problem is that this scheme isn't working. I save the identifiers initially, and when I reload (with the exact same data in A) the identifiers don't match anymore. It seems BitConverter gives different results when I go to save. The really weird thing about it is, if I place a breakpoint in the save code, I can see that the identifier it's writing to the file is fine, and the next load works. If I don't place a breakpoint and, say, print the identifiers to the console instead, they're totally different. It's as if, when my program is running at full speed, the CPU messes up some instructions. This isn't the first time something like this has happened to me; I've seen it in other projects. What gives? Has anyone ever experienced this kind of debugging weirdness? I can't explain how stopping the program and not stopping it can change the output. Also, it's not a hardware problem, because this happens on my laptop as well.


  • Is there an implementation of rapid concurrent syntactical sugar in scala? eg. map-reduce

    - by TiansHUo
    Passing messages around with actors is great. But I would like to have even easier code. Examples (pseudo-code): val splicedList:List[List[Int]]=biglist.partition(100) val sum:Int=ActorPool.numberOfActors(5).getAllResults(splicedList,foldLeft(_+_)) where spliceIntoParts turns one big list into 100 small lists, the numberOfActors part creates a pool which uses 5 actors and hands out new jobs as jobs finish, and getAllResults applies a method to each list - all of this done with message passing in the background. Maybe also a getFirstResult, which calculates the first result and stops all other threads (like cracking a password).
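
    For comparison, later Scala standard libraries grew primitives that cover much of this shape without hand-written actor plumbing; a sketch using scala.concurrent.Future (assuming Scala 2.10+, with the thread pool left to the default execution context rather than the requested 5 actors):

        import scala.concurrent.{Await, Future}
        import scala.concurrent.duration.Duration
        import scala.concurrent.ExecutionContext.Implicits.global

        val bigList = (1 to 1000000).toList

        // One future per 100-element chunk; the execution context bounds the threads.
        val partialSums: Seq[Future[Int]] =
          bigList.grouped(100).toSeq.map(chunk => Future(chunk.sum))

        val sum: Int = Await.result(Future.sequence(partialSums), Duration.Inf).sum

    Future.firstCompletedOf covers the getFirstResult / password-cracking case, completing with whichever future finishes first.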


  • Xcode 4.4 bundle version updates not picked up until subsequent build

    - by Mark Struzinski
    I'm probably missing something simple here. I am trying to auto-increment my build number in Xcode 4.4 only when archiving my application (in preparation for a TestFlight deployment). I have a working shell script that runs on the target and successfully updates the Info.plist file for each build. My build configuration for archiving is named 'Ad-Hoc'. Here is the script: if [ $CONFIGURATION == Ad-Hoc ]; then echo "Ad-Hoc build. Bumping build#..." plist=${PROJECT_DIR}/${INFOPLIST_FILE} buildnum=$(/usr/libexec/PlistBuddy -c "Print CFBundleVersion" "${plist}") if [[ "${buildnum}" == "" ]]; then echo "No build number in $plist" exit 2 fi buildnum=$(expr $buildnum + 1) /usr/libexec/Plistbuddy -c "Set CFBundleVersion $buildnum" "${plist}" echo "Bumped build number to $buildnum" else echo $CONFIGURATION " build - Not bumping build number." fi This script updates the plist file appropriately, and the change is reflected in Xcode each time I archive. The problem is that the .ipa file that comes out of the archive process still shows the previous build number. I have tried the following solutions with no success: clean before build; clean build folder before build; move the Run Script phase to directly after the Target Dependencies step in Build Phases; add the script as a Run Script pre-action in my scheme. No matter what I do, when I look at the build log, I see that the Info.plist file is processed as one of the very first steps, always prior to my script running and updating the build number, which is, I assume, why the build number is never current in the .ipa file. Is there a way to force the Run Script phase to run before the Info.plist file is processed?
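
    One workaround often used for this ordering problem is to bump the number in the already-processed plist inside the build products as well as in the source one, so the value is correct even though Info.plist was copied before the script ran. A sketch that could be appended to the script above, assuming the standard TARGET_BUILD_DIR and INFOPLIST_PATH build settings are available to the Run Script phase:

        # After bumping the source plist, also patch the copy that was already
        # processed into the app bundle for this build.
        built_plist="${TARGET_BUILD_DIR}/${INFOPLIST_PATH}"
        if [ -f "${built_plist}" ]; then
          /usr/libexec/PlistBuddy -c "Set :CFBundleVersion $buildnum" "${built_plist}"
        fi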


  • Why is my JavaScript file sometimes compressed and sometimes not? (IIS gzip problem)

    - by Kevin Yang
    I enabled gzip for JavaScript files in my IIS settings; here's the corresponding config section: <httpCompression directory="%SystemDrive%\inetpub\temp\IIS Temporary Compressed Files"> <scheme name="gzip" dll="%Windir%\system32\inetsrv\gzip.dll" staticCompressionLevel="10" dynamicCompressionLevel="8" /> <dynamicTypes> <add mimeType="text/*" enabled="true" /> <add mimeType="message/*" enabled="true" /> <add mimeType="application/soap+msbin1" enabled="true" /> <add mimeType="*/*" enabled="false" /> </dynamicTypes> <staticTypes> <add mimeType="text/*" enabled="true" /> <add mimeType="message/*" enabled="true" /> <add mimeType="application/javascript" enabled="true" /> <add mimeType="application/x-javascript" enabled="true" /> <add mimeType="*/*" enabled="false" /> </staticTypes> </httpCompression> Currently, when I download my JS file, it seems that sometimes the server returns the gzipped one and sometimes not. I don't know why, or how to debug it. If a file has already been gzipped, it should be cached on the local disk, and the next time someone requests that file, the IIS kernel should return the cached gzipped file directly without compressing it again. Is that right?

