Search Results

Search found 12211 results on 489 pages for 'industry standard'.


  • Recommended format to save time with MJD + BCD format in database

    - by pierr
    Hi, I have a time represented in MJD and BCD format in 5 bytes. What is the recommended format for saving this date-time in an SQLite database so that users can search against it? My first attempt was to save it just as it is, that is, a 5-byte string. The user searches in the same format and converts the result to Unix time with the following code. Later, however, it was suggested that I save the time as an integer (the UTC time, for example), but I cannot find a standard way to do the conversion. I feel this is a common issue and would like to hear your comments.

        time_t sidate_to_unixtime(unsigned char sidate[])
        {
            int k = 0;
            struct tm tm;
            double mjd;

            /* check for the undefined value */
            if ((sidate[0] == 0xff) && (sidate[1] == 0xff) && (sidate[2] == 0xff) &&
                (sidate[3] == 0xff) && (sidate[4] == 0xff)) {
                return -1;
            }

            memset(&tm, 0, sizeof(tm));
            mjd = (sidate[0] << 8) | sidate[1];
            tm.tm_year = (int) ((mjd - 15078.2) / 365.25);
            tm.tm_mon  = (int) (((mjd - 14956.1) - (int) (tm.tm_year * 365.25)) / 30.6001);
            tm.tm_mday = (int) mjd - 14956 - (int) (tm.tm_year * 365.25) - (int) (tm.tm_mon * 30.6001);
            if ((tm.tm_mon == 14) || (tm.tm_mon == 15))
                k = 1;
            tm.tm_year += k;
            tm.tm_mon = tm.tm_mon - 2 - k * 12;
            tm.tm_sec  = bcd_to_integer(sidate[4]);
            tm.tm_min  = bcd_to_integer(sidate[3]);
            tm.tm_hour = bcd_to_integer(sidate[2]);
            return mktime(&tm);
        }
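
    One direction the last suggestion points at, sketched below in C++ against the SQLite C API: convert once with a routine like the one above and store the result in an INTEGER column, so searches become plain integer comparisons. The table and column names are illustrative assumptions, and the sketch assumes the sidate_to_unixtime() function from the question.

        #include <sqlite3.h>
        #include <ctime>

        extern time_t sidate_to_unixtime(unsigned char sidate[]);   /* the question's routine */

        /* Insert one converted timestamp; returns an SQLite result code. */
        int store_start_time(sqlite3 *db, unsigned char sidate[5])
        {
            time_t utc = sidate_to_unixtime(sidate);        /* -1 marks the undefined value */
            sqlite3_stmt *stmt = 0;
            int rc = sqlite3_prepare_v2(db,
                "INSERT INTO events(start_utc) VALUES (?1)", -1, &stmt, 0);
            if (rc != SQLITE_OK)
                return rc;
            sqlite3_bind_int64(stmt, 1, (sqlite3_int64)utc);
            rc = sqlite3_step(stmt);
            sqlite3_finalize(stmt);
            return rc == SQLITE_DONE ? SQLITE_OK : rc;
        }

    A range search is then just "SELECT * FROM events WHERE start_utc BETWEEN ?1 AND ?2", and ordering by start_utc sorts chronologically.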

    Read the article

  • Parse usable Street Address, City, State, Zip from a string

    - by Rob Allen
    Problem: I have an address field from an Access database which has been converted to SQL Server 2005. This field has everything all in one field. I need to parse out the individual sections of the address into their appropriate fields in a normalized table. I need to do this for approximately 4,000 records, and it needs to be repeatable. Here are the rules for this exercise:
     1 - no whining about how this should have been separate fields in the first place; we are often confronted with less-than-ideal situations and have to make the best of them
     2 - for this post, use any language you want
     3 - feel free to play code golf
     4 - assume an address in the US (for now)
     5 - assume that the input string will sometimes contain an addressee (the person being addressed) and/or a second street address (i.e. Suite B)
     6 - states may be abbreviated
     7 - zip code could be standard 5 digit or zip+4
     8 - there are typos in some instances
    UPDATE: In response to the questions posed: standards were not universally followed, I need to store the individual values (not just geocode), and "errors" means typos (corrected above). Sample data:
     A. P. Croll & Son 2299 Lewes-Georgetown Hwy, Georgetown, DE 19947
     11522 Shawnee Road, Greenwood DE 19950
     144 Kings Highway, S.W. Dover, DE 19901
     Intergrated Const. Services 2 Penns Way Suite 405 New Castle, DE 19720
     Humes Realty 33 Bridle Ridge Court, Lewes, DE 19958
     Nichols Excavation 2742 Pulaski Hwy Newark, DE 19711
     2284 Bryn Zion Road, Smyrna, DE 19904
     VEI Dover Crossroads, LLC 1500 Serpentine Road, Suite 100 Baltimore MD 21
     580 North Dupont Highway Dover, DE 19901
     P.O. Box 778 Dover, DE 19903
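
    As a small illustration of the usual right-to-left strategy (peel the well-structured pieces, the state and the zip, off the end before tackling street and addressee), here is a hedged C++ sketch. It is not a full parser, and the pattern is an assumption that will not survive every typo in the data above.

        #include <iostream>
        #include <regex>
        #include <string>

        // Matches "<everything else> <STATE> <ZIP or ZIP+4>" at the end of the line.
        // Whatever precedes the state is left for a later, messier pass.
        const std::regex kTail(R"(^(.*?)[,\s]+([A-Za-z]{2})[,\s]+(\d{5}(?:-\d{4})?)\s*$)");

        int main()
        {
            std::string line = "Humes Realty 33 Bridle Ridge Court, Lewes, DE 19958";
            std::smatch m;
            if (std::regex_match(line, m, kTail)) {
                std::cout << "rest : " << m[1] << "\n"
                          << "state: " << m[2] << "\n"
                          << "zip  : " << m[3] << "\n";
            }
        }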

    Read the article

  • Spring 3.0 vs J2EE 6.0

    - by StudiousJoseph
    Hi everybody, I'm confronted with a situation... I've been asked to give advice on which approach to take for J2EE development: Spring 3.0 or J2EE 6.0. I was, and still am, a promoter of Spring 2.5 over classic J2EE 5 development, especially with JBoss. I even migrated old apps to Spring, influenced the re-definition of the development policy here to include Spring-specific APIs, and helped develop a strategic plan to foster more lightweight solutions like Spring + Tomcat instead of the heavier JBoss stack. Right now we're using JBoss merely as a web container, which produces what I call the "container inside the container" paradox: Spring apps, with most of their APIs, running inside JBoss. So we're in the process of migrating to Tomcat. However, with the coming of J2EE 6.0, many of the features that made Spring attractive at the time (easy deployment, less coupling, even some sort of dependency injection) seem to have been mimicked in one way or another: JSF 2.0, JPA 2.0, WebBeans, Web Profiles, etc. So the question goes: from your point of view, how safe and logical is it to continue to invest in a non-standard J2EE development framework like Spring, given the new perspectives offered by J2EE 6.0? Can we talk about maybe 3 or 4 more years of Spring development, or do you recommend early adoption of the J2EE 6.0 APIs and practices? I'll appreciate any insights on this...

    Read the article

  • Password hashing, salt and storage of hashed values

    - by Jonathan Leffler
    Suppose you were at liberty to decide how hashed passwords were to be stored in a DBMS. Are there obvious weaknesses in a scheme like this one? To create the hash value stored in the DBMS, take:
     A value that is unique to the DBMS server instance as part of the salt,
     And the username as a second part of the salt,
     And create the concatenation of the salt with the actual password,
     And hash the whole string using the SHA-256 algorithm,
     And store the result in the DBMS.
    This would mean that anyone wanting to come up with a collision should have to do the work separately for each user name and each DBMS server instance separately. I'd plan to keep the actual hash mechanism somewhat flexible to allow for the use of the new NIST standard hash algorithm (SHA-3) that is still being worked on. The 'value that is unique to the DBMS server instance' need not be secret - though it wouldn't be divulged casually. The intention is to ensure that if someone uses the same password in different DBMS server instances, the recorded hashes would be different. Likewise, the user name would not be secret - just the password proper. Would there be any advantage to having the password first and the user name and 'unique value' second, or any other permutation of the three sources of data? Or what about interleaving the strings? Do I need to add (and record) a random salt value (per password) as well as the information above? (Advantage: the user can re-use a password and still, probably, get a different hash recorded in the database. Disadvantage: the salt has to be recorded. I suspect the advantage considerably outweighs the disadvantage.) There are quite a lot of related SO questions - this list is unlikely to be comprehensive:
     Encrypting/Hashing plain text passwords in database
     Secure hash and salt for PHP passwords
     The necessity of hiding the salt for a hash
     Clients-side MD5 hash with time salt
     Simple password encryption
     Salt generation and Open Source software
    I think that the answers to these questions support my algorithm (though if you simply use a random salt, then the 'unique value per server' and username components are less important).
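
    For concreteness, a minimal sketch of the composition being described, here using OpenSSL's SHA256(); the separator bytes and the argument order are illustrative choices, not part of the question.

        #include <openssl/sha.h>
        #include <cstdio>
        #include <string>

        std::string hash_password(const std::string &server_id,     // per-instance value
                                   const std::string &username,
                                   const std::string &random_salt,   // optional per-password salt
                                   const std::string &password)
        {
            // NUL separators keep the parts unambiguous regardless of their order.
            std::string input = server_id + '\0' + username + '\0' + random_salt + '\0' + password;

            unsigned char digest[SHA256_DIGEST_LENGTH];
            SHA256(reinterpret_cast<const unsigned char *>(input.data()), input.size(), digest);

            char hex[2 * SHA256_DIGEST_LENGTH + 1];
            for (int i = 0; i < SHA256_DIGEST_LENGTH; ++i)
                std::sprintf(hex + 2 * i, "%02x", digest[i]);
            return std::string(hex, 2 * SHA256_DIGEST_LENGTH);
        }

    On the ordering question: with explicit separators between the parts, the permutation matters much less, because different (server value, username, password) triples can no longer concatenate to the same input string.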

    Read the article

  • Float addition promoted to double?

    - by Andreas Brinck
    I had a small WTF moment this morning. The WTF can be summarized with this:

        float x = 0.2f;
        float y = 0.1f;
        float z = x + y;
        assert(z == x + y); // This assert is triggered! (At least with Visual Studio 2008)

    The reason seems to be that the expression x + y is promoted to double and compared with the truncated version in z. (If I change z to double, the assert isn't triggered.) I can see that for precision reasons it would make sense to perform all floating point arithmetic in double precision before converting the result to single precision. I found the following paragraph in the standard (which I guess I sort of already knew, but not in this context): 4.6.1: "An rvalue of type float can be converted to an rvalue of type double. The value is unchanged." My question is: is x + y guaranteed to be promoted to double, or is it at the compiler's discretion? UPDATE: Since many people have claimed that one shouldn't use == for floating point, I just want to state that in the specific case I'm working with, an exact comparison is justified. Floating point comparison is tricky; here's an interesting link on the subject which I think hasn't been mentioned.
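
    A small sketch of the usual workaround, independent of whether the wider evaluation is mandatory or merely permitted: force the right-hand side back to float before comparing, so any excess precision is discarded the same way on both sides.

        #include <cassert>

        int main()
        {
            float x = 0.2f;
            float y = 0.1f;
            float z = x + y;
            // The cast, like the assignment to z, requires the value to be rounded to
            // float, so both operands of == have been through the same rounding step.
            assert(z == static_cast<float>(x + y));
            return 0;
        }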

    Read the article

  • TimeZone change to UTC while updating the Appointment

    - by Firoz Ansari
    I am using EWS 1.2 to send appointments. On creating a new appointment, the TimeZone shows properly in the notification mail, but on updating the same appointment, its TimeZone resets to UTC. Could anyone help me fix this issue? Here is sample code to replicate it:

        ExchangeService service = new ExchangeService(ExchangeVersion.Exchange2010_SP1,
            TimeZoneInfo.FindSystemTimeZoneById("Eastern Standard Time"));
        service.Credentials = new WebCredentials("ews_calendar", PASSWORD, "acme");
        service.Url = new Uri("https://acme.com/EWS/Exchange.asmx");

        Appointment newAppointment = new Appointment(service);
        newAppointment.Subject = "Test Subject";
        newAppointment.Body = "Test Body";
        newAppointment.Start = new DateTime(2012, 03, 27, 17, 00, 0);
        newAppointment.End = newAppointment.Start.AddMinutes(30);
        newAppointment.RequiredAttendees.Add("[email protected]");

        // Attendees get a notification mail for this appointment using the
        // (UTC-05:00) Eastern Time (US & Canada) timezone. Notification content:
        // When: Tuesday, March 27, 2012 5:00 PM-5:30 PM. (UTC-05:00) Eastern Time (US & Canada)
        newAppointment.Save(SendInvitationsMode.SendToAllAndSaveCopy);

        // Pull existing appointment
        string itemId = newAppointment.Id.ToString();
        Appointment existingAppointment = Appointment.Bind(service, new ItemId(itemId));

        // Attendees get a notification mail for this appointment using the UTC timezone.
        // Notification content:
        // When: Tuesday, March 27, 2012 11:00 PM-11:30 PM. UTC
        existingAppointment.Update(ConflictResolutionMode.AlwaysOverwrite,
            SendInvitationsOrCancellationsMode.SendToAllAndSaveCopy);

    Read the article

  • Detecting branch reintegration or merge in pre-commit script

    - by Shawn Chin
    Within a pre-commit script, is it possible (and if so, how) to identify commits stemming from an svn merge? svnlook changed ... shows files that have changed, but does not differentiate between merges and manual edits. Ideally, I would also like to differentiate between a standard merge and a merge --reintegrate. Background: I'm exploring the possibility of using pre-commit hooks to enforce SVN usage policies for our project. One of the policies states that some directories (such as /trunk) should not be modified directly, and changed only through the reintegration of feature branches. The pre-commit script would therefore reject all changes made to these directories apart from branch reintegrations. Any ideas? Update: I've explored the svnlook command, and the closest I've got is to detect and parse changes to the svn:mergeinfo property of the directory. This approach has some drawbacks: svnlook can flag up a change in properties, but not which property was changed (a diff with the proplist of the previous revision is required). And by inspecting changes in svn:mergeinfo, it is possible to detect that svn merge was run; however, there is no way to determine whether the commits are purely a result of the merge, so changes manually made after the merge will go undetected. (Related post: Diff transaction tree against another path/revision)

    Read the article

  • Scheme: what are the benefits of letrec?

    - by Ixmatus
    While reading "The Seasoned Schemer" I've begun to learn about letrec. I understand what it does (it can be duplicated with a Y-combinator) but the book is using it in lieu of recurring on the already defined function operating on arguments that remain static. An example of an old function using the defined function recurring on itself (nothing special):

        (define (substitute new old lat)
          (cond ((null? lat) '())
                ((eq? (car lat) old) (cons new (substitute new old (cdr lat))))
                (else (cons (car lat) (substitute new old (cdr lat))))))

    Now for an example of that same function but using letrec:

        (define (substitute new old lat)
          (letrec ((replace
                     (lambda (l)
                       (cond ((null? l) '())
                             ((eq? (car l) old) (cons new (replace (cdr l))))
                             (else (cons (car l) (replace (cdr l))))))))
            (replace lat)))

    Aside from being slightly longer and more difficult to read, I don't know why they are rewriting functions in the book to use letrec. Is there a speed enhancement when recurring over a static variable this way, because you don't keep passing it? Is this standard practice for functions with arguments that remain static but one argument that is reduced (such as recurring down the elements of a list)? Some input from more experienced Schemers/Lispers would help!

    Read the article

  • SQL Server: What locale should be used to format numeric values into SQL Server format?

    - by Ian Boyd
    It seems that SQL Server does not accept numbers formatted using any particular locale. It also doesn't support locales that have digits other than 0-9. For example, if the current locale is Bengali, then the number 123456789 would come out as "?????????". And that's just the digits, never mind what the digit grouping would be. But the same problem happens for numbers in the invariant locale, which formats numbers as "123,456,789", which SQL Server won't accept. Is there a culture that matches what SQL Server accepts for numeric values? Or will I have to create some custom "sql server" culture, generating rules for that culture myself from lower-level formatting routines? If I were in .NET (which I'm not), I could peruse the standard numeric format strings. The format codes available in .NET are:
     c (Currency): $123.46
     d (Decimal): 1234
     e (Exponential): 1.052033E+003
     f (Fixed Point): 1234.57
     g (General): 123.456
     n (Number): 1,234.57
     p (Percent): 100.00 %
     r (Round Trip): 123456789.12345678
     x (Hexadecimal): FF
    Only 6 of these accept all numeric types, and of those only 2 generate string representations (in the en-US locale, anyway) that would be accepted by SQL Server. Of those remaining two, Fixed Point is dependent on the locale's digits rather than the number being used, leaving the General (g) format, and I can't even say for certain that the g format won't add digit groupings (e.g. 1,234). Is there a locale that formats numbers in the way SQL Server expects? Is there a .NET format code? A Java format code? A Delphi format code? A VB format code? A stdio format code? latin-numeral-digits
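
    Not a .NET answer, but the same idea expressed in C++ for illustration: formatting under the classic "C" locale always yields ASCII digits, '.' as the decimal point and no digit grouping, which is the plain textual form SQL Server parses. A hedged sketch:

        #include <iostream>
        #include <locale>
        #include <sstream>
        #include <string>

        std::string to_sql_number(double value)
        {
            std::ostringstream out;
            out.imbue(std::locale::classic());   // "C" locale: 0-9 digits, '.', no grouping
            out.precision(17);                   // enough significant digits to round-trip a double
            out << value;
            return out.str();
        }

        int main()
        {
            std::cout << to_sql_number(123456789.12345678) << "\n";
        }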

    Read the article

  • Qt and variadic functions

    - by Noah Roberts
    OK, before lecturing me on the use of C-style variadic functions in C++... everything else has turned out to require nothing short of rewriting the Qt MOC. What I'd like to know is whether or not you can have a "slot" in a Qt object that takes an arbitrary amount/type of arguments. The thing is that I really want to be able to generate Qt objects that have slots of an arbitrary signature. Since the MOC is incompatible with standard preprocessing and with templates, it's not possible to do so with either direct approach. I just came up with another idea:

        struct funky_base : QObject
        {
            Q_OBJECT
            funky_base(QObject * o = 0);
        public slots:
            virtual void the_slot(...) = 0;
        };

    If this is possible then, because you can make a template that is a subclass of a QObject-derived object so long as you don't declare new Qt stuff in it, I should be able to implement a derived templated type that takes the ... stuff and turns it into the appropriate, expected types. If it is, how would I connect to it? Would this work?

        connect(x, SIGNAL(someSignal(int)), y, SLOT(the_slot(...)));

    If nobody's tried anything this insane and doesn't know offhand, yes, I'll eventually try it myself... but I am hoping someone already has existing knowledge I can tap before possibly wasting my time on it.

    Read the article

  • What's the most efficient query?

    - by Aaron Carlino
    I have a table named Projects that has the following relationships: has many Contributions, has many Payments. In my result set, I need the following aggregate values:
     - Number of unique contributors (DonorID on the Contribution table)
     - Total contributed (SUM of Amount on the Contribution table)
     - Total paid (SUM of PaymentAmount on the Payment table)
    Because there are so many aggregate functions and multiple joins, it gets messy to use standard aggregate functions in the GROUP BY clause. I also need the ability to sort and filter these fields. So I've come up with two options. Using subqueries:

        SELECT Project.ID AS PROJECT_ID,
            (SELECT SUM(PaymentAmount) FROM Payment WHERE ProjectID = PROJECT_ID) AS TotalPaidBack,
            (SELECT COUNT(DISTINCT DonorID) FROM Contribution WHERE RecipientID = PROJECT_ID) AS ContributorCount,
            (SELECT SUM(Amount) FROM Contribution WHERE RecipientID = PROJECT_ID) AS TotalReceived
        FROM Project;

    Using a temporary table:

        DROP TABLE IF EXISTS Project_Temp;
        CREATE TEMPORARY TABLE Project_Temp (
            project_id INT NOT NULL,
            total_payments INT,
            total_donors INT,
            total_received INT,
            PRIMARY KEY (project_id)
        ) ENGINE=MEMORY;

        INSERT INTO Project_Temp (project_id, total_payments)
        SELECT `Project`.ID, IFNULL(SUM(PaymentAmount), 0)
        FROM `Project`
        LEFT JOIN `Payment` ON ProjectID = `Project`.ID
        GROUP BY 1;

        INSERT INTO Project_Temp (project_id, total_donors, total_received)
        SELECT `Project`.ID, IFNULL(COUNT(DISTINCT DonorID), 0), IFNULL(SUM(Amount), 0)
        FROM `Project`
        LEFT JOIN `Contribution` ON RecipientID = `Project`.ID
        GROUP BY 1
        ON DUPLICATE KEY UPDATE
            total_donors = VALUES(total_donors),
            total_received = VALUES(total_received);

        SELECT * FROM Project_Temp;

    Tests for both are pretty comparable, in the 0.7 - 0.8 seconds range with 1,000 rows. But I'm really concerned about scalability, and I don't want to have to re-engineer everything as my tables grow. What's the best approach?

    Read the article

  • Separate HTML pages for each screen in jQuery Mobile

    - by vrs
    I am a newbie to jQuery Mobile. So far, all the examples I have found contain only one HTML page for the whole application, with multiple div tags where each page/screen is defined as a div with data-role "page", optionally with a header and footer. Based on user actions, we hide some divs (pages) and show only the expected page. This multi-page template seems to be the standard design, as described in several blogs. Are there other ways of designing this? What I would like to have is multiple HTML pages, for example one for login, one for home, one for contact, etc. Otherwise it is difficult to understand/code/debug issues, especially for people from a Java background like me. So what I want is some kind of MVC design with jQuery Mobile, with each view/screen as a separate HTML file associated with one js file (controller). Can we have multiple HTML pages in a jQuery Mobile app? If possible, how do we pass data / maintain session between them? Any samples are most welcome. Thanks in advance. Note: I also don't want server-side includes; my app contains 10 to 15 screens, and each page will make a web service call, fetch some data and map it to the UI.

    Read the article

  • Problem loading texture with transparency with OpenGL ES and Android

    - by Evan Kimia
    I'm trying to load an image that has background transparency and will be layered over another texture. When I try to load it, all I get is a white screen. The texture is 512 by 512, and it's saved in Photoshop as a 24-bit PNG (standard PNG specs in the Photoshop Save for Web and Devices config window). Any idea why it's not showing? The texture without transparency shows without a problem. Here is my loadTextures method:

        public void loadGLTexture(GL10 gl, Context context) {
            // Get the texture from the Android resource directory
            Bitmap bitmap = BitmapFactory.decodeResource(context.getResources(), R.drawable.m1);
            Bitmap normalScheduleLines = BitmapFactory.decodeResource(context.getResources(), R.drawable.m1n);

            // Generate texture pointers...
            gl.glGenTextures(3, textures, 0);

            // ...and bind it to our array
            gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[1]);

            // Create Nearest Filtered Texture
            gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_LINEAR_MIPMAP_NEAREST);
            gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
            gl.glTexParameterf(GL11.GL_TEXTURE_2D, GL11.GL_GENERATE_MIPMAP, GL11.GL_TRUE);
            gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S, GL10.GL_CLAMP_TO_EDGE);
            gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T, GL10.GL_CLAMP_TO_EDGE);
            GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0);
            bitmap.recycle();

            // Bind our normal schedule bus map lines
            gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[0]);

            // Create Nearest Filtered Texture
            gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_LINEAR_MIPMAP_NEAREST);
            gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
            gl.glTexParameterf(GL11.GL_TEXTURE_2D, GL11.GL_GENERATE_MIPMAP, GL11.GL_TRUE);
            gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S, GL10.GL_CLAMP_TO_EDGE);
            gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T, GL10.GL_CLAMP_TO_EDGE);
            GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, GL10.GL_RGBA, normalScheduleLines, 0);
            normalScheduleLines.recycle();
        }
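
    A white result can have several causes, so this is only a hedged side note rather than a diagnosis: for the transparent areas to actually show through at draw time, blending must also be enabled. The GL10 interface exposes the same calls as the C API; in C terms they are:

        #include <GLES/gl.h>

        /* Enable standard "source over" alpha blending before drawing the
           textured geometry that uses the transparent PNG. */
        void enable_alpha_blending(void)
        {
            glEnable(GL_BLEND);
            glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        }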

    Read the article

  • [WordPress] How do you remove a Category-style (hierarchical) taxonomy metabox?

    - by Manny Fleurmond
    I was wondering if someone can help me with this. I'm currently following Shibashake's tutorial about creating custom meta-boxes that include taxonomy selection here: http://shibashake.com/wordpress-theme/wordpress-custom-taxonomy-input-panels . It shows how to remove the standard metabox WordPress automatically creates for taxonomies using the remove_meta_box function. The only problem is that the function for some reason doesn't seem to work on taxonomies that act as categories, i.e. ones where the hierarchical option is set to true. I know the function is working because the ones set up as tags disappear easily enough. I can't tell whether it just isn't possible or whether there is something special I need to add in one of the parameters to make it work. Example:

        $args = array(
            'hierarchical' => false,
            'label'        => 'People',
            'query_var'    => true,
            'rewrite'      => true
        );
        register_taxonomy('people', 'post', $args);
        remove_meta_box('tagsdiv-people', 'post', 'side');

    That works fine. If I set hierarchical to true, however, the meta box stays put. Can anyone shed some light?

    Read the article

  • LINQ to SQL repository pattern, some doubts

    - by MindlessProgrammer
    I am using the repository pattern with LINQ to SQL, with one repository class per table. I want to know whether I am doing it in a good/standard way.

        ContactRepository
            Contact GetByID()
            Contact GetAll()

        ContactTagRepository
            List<ContactTag> Get(long contactID)
            List<ContactTag> GetAll()
            List<ContactTagDetail> GetAllDetails()

        class ContactTagDetail
        {
            public Contact Contact { get; set; }
            public ContactTag ContactTag { get; set; }
        }

    When I need a contact I call a method on ContactRepository, and the same for ContactTag. But when I need a contact and its tags together, I call GetAllDetails() in the ContactTag repository; it does not return the ContactTag entity generated by the ORM, instead it returns a ContactTagDetail entity containing both the Contact and the ContactTag generated by the ORM. I know I can simply call GetAll in the ContactTag repository and access Contact.ContactTag, but since it's LINQ to SQL there is no option to defer loading at the query level, so whenever I need an entity together with a related entity I create a projection class. Another doubt is where I really need to write such a method: I could do it in either the Contact or the ContactTag repository, e.g. a GetAllWithTags() in the Contact repository, but I am doing it in the ContactTag repository. What are your suggestions?

    Read the article

  • How to use Data aware controls "correctly"?

    - by lyborko
    Hi, I would like to ask experienced users whether you prefer to use data-aware controls to add, insert, delete and edit data in a DB, or whether you favor doing it manually. I developed some DB applications in which, for the sake of a "user friendly policy", I ran into a complicated web of table events (AfterInsert, AfterEdit, After... and BeforeEdit, BeforeInsert, Before...). After that it was quite nasty work to debug the application. Aware of this risk (later, in another application) I tried to avoid this problem, so I paid increased attention to writing code that was clean, readable and comprehensible. Everything seemed all right at the beginning, but as I needed to handle some preprocessing before sending and loading data etc., I ran into the same problems again, slowly and inevitably. Sometimes I could not use data-aware controls anyway, and what seemed to be a "cool" feature of a data-aware control at the beginning turned into an obstacle in the end. I "had to" write a special routine for non-data-aware controls in order to make them behave as if they were data-aware. Then I asked myself: why on earth should I use data-aware controls? Is it better to base the application architecture on non-data-aware controls? It requires more time to write bug-proof code, of course, but is it worth it? I do not know... It has happened to me several times, like a jinx: paradise at the beginning, hell at the end... I do not know whether I am using the wrong method to write DB programs, or whether there is some standard common practice for how to proceed. Or is this a common problem for everybody? Thanks for your advice and experiences.

    Read the article

  • Git repository gets corrupted when I do a large commit: "Possible repository corruption on the remote side"

    - by mindthief
    Hi all, a friend of mine and I have been trying to use git for a project. It is hosted on his server, and I git clone it as:

        git clone [email protected]:/path/to/git/repos.git

    Pretty standard stuff, and it works great for a while. But every time one of us has added a large commit (which git supposedly handles very well), of the order of 100MB or so, the git repository gets kind of broken. Basically, at this point I will be able to push new changes and pull other changes (I think), but when I try to clone the repository in a fresh location using the command above, I get an error message that says:

        $ git clone [email protected]:/path/to/git/repos.git
        Initialized empty Git repository in /local/path/to/repos/.git/
        remote: Counting objects: 1455, done.
        remote: Compressing objects: 100% (1235/1235), done.
        error: git upload-pack: git-pack-objects died with error.
        fatal: git upload-pack: aborting due to possible repository corruption on the remote side.
        remote: aborting due to possible repository corruption on the remote side.
        fatal: early EOF
        fatal: index-pack failed

    This has happened 3 or 4 times now, and it's always when I add a large commit. Any idea why this is happening? How can we fix it? We're both using Mac OS X Snow Leopard. Thanks! -M

    Read the article

  • gcc, strict-aliasing, and horror stories

    - by Joseph Quinsey
    In http://stackoverflow.com/questions/2906365/gcc-strict-aliasing-and-casting-through-a-union I asked whether anyone had encountered problems with union punning through pointers. So far, the answer seems to be No. This question is broader: do you have any horror stories about gcc and strict-aliasing? Background: Quoting from AndreyT's answer in http://stackoverflow.com/questions/2771023/c99-strict-aliasing-rules-in-c-gcc/2771041#2771041: "Strict aliasing rules are rooted in parts of the standard that were present in C and C++ since the beginning of [standardized] times. The clause that prohibits accessing object of one type through a lvalue of another type is present in C89/90 (6.3) as well as in C++98 (3.10/15). ... It is just that not all compilers wanted (or dared) to enforce it or rely on it." Well, gcc is now daring to do so, with its -fstrict-aliasing switch. And this has caused some problems. See, for example, the excellent article http://davmac.wordpress.com/2009/10/ about a MySQL bug, and the equally excellent discussion in http://cellperformance.beyond3d.com/articles/2006/06/understanding-strict-aliasing.html. Some other less-relevant links:
     http://stackoverflow.com/questions/1225741/performance-impact-of-fno-strict-aliasing
     http://stackoverflow.com/questions/754929/strict-aliasing
     http://stackoverflow.com/questions/262379/when-is-char-safe-for-strict-pointer-aliasing
     http://stackoverflow.com/questions/725138/how-to-detect-strict-aliasing-at-compile-time
    So to repeat: do you have a horror story of your own? Problems not indicated by -Wstrict-aliasing would, of course, be preferred. And other C compilers are also welcome.
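
    For readers new to the topic, a minimal sketch of the kind of code the rule is about: the first function is exactly the pattern -fstrict-aliasing licenses the optimizer to break, and the memcpy form is the well-defined alternative.

        #include <cstdint>
        #include <cstdio>
        #include <cstring>

        // Violation: reading a float's bytes through a uint32_t lvalue. With strict
        // aliasing in force the compiler may assume f and the uint32_t never alias.
        uint32_t bits_bad(float f)
        {
            return *reinterpret_cast<uint32_t *>(&f);   // undefined behaviour
        }

        // Well-defined alternative: copy the object representation.
        uint32_t bits_ok(float f)
        {
            uint32_t u;
            std::memcpy(&u, &f, sizeof u);
            return u;
        }

        int main()
        {
            std::printf("%08x %08x\n", (unsigned)bits_bad(1.0f), (unsigned)bits_ok(1.0f));
        }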

    Read the article

  • Destructors not called when native (C++) exception propagates to CLR component

    - by Phil Nash
    We have a large body of native C++ code, compiled into DLLs. Then we have a couple of DLLs containing C++/CLI proxy code to wrap the C++ interfaces. On top of that we have C# code calling into the C++/CLI wrappers. Standard stuff, so far. But we have a lot of cases where native C++ exceptions are allowed to propagate to the .NET world, and we rely on .NET's ability to wrap these as System.Exception objects; for the most part this works fine. However, we have been finding that destructors of objects in scope at the point of the throw are not being invoked when the exception propagates! After some research we found that this is a fairly well known issue. However, the solutions/workarounds seem less consistent. We did find that if the native code is compiled with /EHa instead of /EHsc the issue disappears (at least in our test case it did). However, we would much prefer to use /EHsc, as we translate SEH exceptions to C++ exceptions ourselves and we would rather allow the compiler more scope for optimisation. Are there any other workarounds for this issue - other than wrapping every call across the native-managed boundary in a (native) try-catch-throw (in addition to the C++/CLI layer)?
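
    For illustration, a hedged sketch of the per-call workaround mentioned in the last sentence: catch on the native side of the boundary, so that unwinding (and therefore the destructors) happens under the native /EHsc model, then rethrow something simple for the C++/CLI layer to translate into a System.Exception. The names here are hypothetical, not the code base in question.

        #include <exception>
        #include <string>

        // Simple carrier type the C++/CLI wrapper would know how to convert.
        struct TranslatedError
        {
            std::string message;
        };

        // Wrap each native call made from the C++/CLI layer.
        template <typename Fn>
        auto call_native(Fn fn) -> decltype(fn())
        {
            try {
                return fn();                            // locals inside fn unwind here, natively
            } catch (const std::exception &e) {
                throw TranslatedError{ e.what() };      // caught and converted by the wrapper
            } catch (...) {
                throw TranslatedError{ "unknown native exception" };
            }
        }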

    Read the article

  • Storing arbitrary data in HTML

    - by Rob Colburn
    What is the best way to embed data in HTML elements for later use? As an example, let's say we have jQuery returning some JSON from the server, and we want to dump that data out to the user as paragraphs. However, we want to be able to attach meta-data to these elements so we can wire up events for them later. The way I tend to handle this is with some ugly prefixing:

        function handle_response(data) {
            var html = '';
            for (var i in data) {
                html += '<p id="prefix_' + data[i].id + '">' + data[i].message + '</p>';
            }
            jQuery('#log').html(html).find('p').click(function(){
                alert('The ID is: ' + $(this).attr('id').substr(7));
            });
        }

    Alternatively, one can build a form in the paragraph and store the meta-data there. But that often feels like overkill. This has been asked before in different ways, but I do not feel it's been answered well:
     http://stackoverflow.com/questions/432174/how-to-store-arbitrary-data-for-some-html-tags
     http://stackoverflow.com/questions/209428/non-standard-attributes-on-html-tags-good-thing-bad-thing-your-thoughts

    Read the article

  • Static content not displayed with Zend FW

    - by shin
    I am trying to display static content with Zend Framework. When I go to http://square.localhost/content/services, I get an error message. Could anyone tell me how to fix this, please? Thanks in advance.

    application.ini:

        ....
        resources.layout.layoutPath = APPLICATION_PATH "/layouts"
        resources.layout.layout = "master"
        resources.router.routes.home.route = /home
        resources.router.routes.home.defaults.module = default
        resources.router.routes.home.defaults.controller = index
        resources.router.routes.home.defaults.action = index
        resources.router.routes.static-content.route = /content/:page
        resources.router.routes.static-content.defaults.module = default
        resources.router.routes.static-content.defaults.controller = static-content
        resources.router.routes.static-content.defaults.action = display

    application/modules/default/controllers/StaticContentController.php:

        class StaticContentController extends Zend_Controller_Action
        {
            public function init()
            {
            }

            // display static views
            public function displayAction()
            {
                $page = $this->getRequest()->getParam('page');
                if (file_exists($this->view->getScriptPath(null) . "/"
                        . $this->getRequest()->getControllerName() . "/$page." . $this->viewSuffix)) {
                    $this->render($page);
                } else {
                    throw new Zend_Controller_Action_Exception('Page not found', 404);
                }
            }
        }

    application/modules/default/views/scripts/static-content/services.phtml:

        some html ... ...

    Error message:

        An error occurred
        Page not found
        Exception information:
        Message: Page not found
        Stack trace:
        #0 /var/www/square/library/Zend/Controller/Action.php(513): StaticContentController->displayAction()
        #1 /var/www/square/library/Zend/Controller/Dispatcher/Standard.php(295): Zend_Controller_Action->dispatch('displayAction')
        #2 /var/www/square/library/Zend/Controller/Front.php(954): Zend_Controller_Dispatcher_Standard->dispatch(Object(Zend_Controller_Request_Http), Object(Zend_Controller_Response_Http))
        #3 /var/www/square/library/Zend/Application/Bootstrap/Bootstrap.php(97): Zend_Controller_Front->dispatch()
        #4 /var/www/square/library/Zend/Application.php(366): Zend_Application_Bootstrap_Bootstrap->run()
        #5 /var/www/square/public/index.php(26): Zend_Application->run()
        #6 {main}
        Request Parameters:
        array (
            'page' => 'services',
            'module' => 'default',
            'controller' => 'static-content',
            'action' => 'display',
        )

    Read the article

  • Using the Proxy pattern with C++ iterators

    - by Billy ONeal
    Hello everyone :) I've got a moderately complex iterator written which wraps the FindXFile APIs on Win32 (see previous question). In order to avoid the overhead of constructing an object that essentially duplicates the work of the WIN32_FIND_DATAW structure, I have a proxy object which simply acts as a sort of const reference to the single WIN32_FIND_DATAW which is declared inside the noncopyable innards of the iterator. This is great because:
     1. Clients do not pay for construction of irrelevant information they will probably not use (most of the time people are only interested in file names), and
     2. Clients can get at all the information provided by the FindXFile APIs if they need or want this information.
    This becomes an issue, though, because there is only ever a single copy of the object's actual data. Therefore, when the iterator is incremented, all of the proxies are invalidated (set to whatever the next file pointed to by the iterator is). I'm concerned that this is a major problem, because I can think of a case where the proxy object would not behave as somebody would expect:

        std::vector<MyIterator::value_type> files;
        std::copy(MyIterator("Hello"), MyIterator(), std::back_inserter(files));

    because the vector contains nothing but a bunch of invalid proxies at that point. Instead, clients need to do something like:

        std::vector<std::wstring> filesToSearch;
        std::transform(
            DirectoryIterator<FilesOnly>(L"C:\\Windows\\*"),
            DirectoryIterator<FilesOnly>(),
            std::back_inserter(filesToSearch),
            std::mem_fun_ref(&DirectoryIterator<FilesOnly>::value_type::GetFullFileName)
        );

    Seeing this, I can see why somebody might dislike what the standard library designers did with std::vector<bool>. I'm still wondering though: is this a reasonable trade-off in order to achieve (1) and (2) above? If not, is there any way to still achieve (1) and (2) without the proxy?
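
    One possible middle ground, sketched below with hypothetical names (this is not the questioner's code): keep the cheap proxy as the iterator's reference type, but offer an owning "snapshot" type that copies out of the proxy, so callers who need to keep results past the next increment opt into that cost explicitly.

        #include <string>

        struct FindDataProxy                      // stands in for the question's proxy;
        {
            const std::wstring *fullFileName;     // points at state owned by the iterator
            unsigned long long  fileSize;
        };

        struct FindDataSnapshot                   // owning copy, safe to store in containers
        {
            std::wstring fullFileName;
            unsigned long long fileSize;

            explicit FindDataSnapshot(const FindDataProxy &p)
                : fullFileName(*p.fullFileName), fileSize(p.fileSize) {}
        };

        // Usage would then resemble:
        //   std::vector<FindDataSnapshot> files;
        //   std::transform(DirectoryIterator<FilesOnly>(L"C:\\Windows\\*"),
        //                  DirectoryIterator<FilesOnly>(),
        //                  std::back_inserter(files),
        //                  [](const FindDataProxy &p) { return FindDataSnapshot(p); });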

    Read the article

  • JavaME - LWUIT images eat up all the memory

    - by Marko
    Hi, I'm writing a MIDlet using LWUIT, and images seem to eat up incredible amounts of memory. All the images I use are PNGs and are packed inside the JAR file. I load them using the standard Image.createImage(URL) method. The application has a number of forms and each has a couple of labels and buttons; however, I am fairly certain that only the active form is kept in memory (I know it isn't very trustworthy, but Runtime.freeMemory() seems to confirm this). The application has worked well in 240x320 resolution, but moving it to 480x640 and using appropriately larger images for the UI started causing out-of-memory errors to show up. What the application does, among other things, is download remote images. The application seems to work fine until it gets to this point. After downloading a couple of PNGs and returning to the main menu, the out-of-memory error is encountered. Naturally, I looked into the amount of memory the main menu uses, and it was pretty shocking. It's just two labels with images and four buttons. Each button has three images used for style.setIcon, setPressedIcon and setRolloverIcon. Images range in size from 15 to 25KB, but after removing two of the three images used for every button (so 8 images in total), Runtime.freeMemory() showed a stunning 1MB decrease in memory usage. The way I see it, either I have a whole lot of memory leaks (which I don't think I do, but memory leaks aren't exactly known to be easily tracked down), I am doing something terribly wrong with image handling, or there's really no problem involved and I just need to scale down. If anyone has any insight to offer, I would greatly appreciate it.

    Read the article

  • Alpha blending colors in .NET Compact Framework 2.0

    - by Adam Haile
    In the full .NET Framework you can use the Color.FromArgb() method to create a new color with alpha blending, like this:

        Color blended = Color.FromArgb(alpha, color);

    or

        Color blended = Color.FromArgb(alpha, red, green, blue);

    However, in the Compact Framework (2.0 specifically), neither of those overloads is available; you only get:

        Color.FromArgb(int red, int green, int blue);

    and

        Color.FromArgb(int val);

    The first one, obviously, doesn't even let you enter an alpha value, but the documentation for the latter shows that "val" is a 32-bit ARGB value (as 0xAARRGGBB, as opposed to the standard 24-bit 0xRRGGBB), so it would make sense that you could just build the ARGB value and pass it to the function. I tried this with the following:

        private Color FromARGB(byte alpha, byte red, byte green, byte blue)
        {
            int val = (alpha << 24) | (red << 16) | (green << 8) | blue;
            return Color.FromArgb(val);
        }

    But no matter what I do, the alpha blending never works; the resulting color always has full opacity, even when setting the alpha value to 0. Has anyone gotten this to work on the Compact Framework?

    Read the article

  • AdMob banner not getting removed from superview

    - by Gamer
    I am developing a 2D game using the cocos2d framework. In this game I am using AdMob for advertising. I only want the banner in certain classes, not in all of them, but the AdMob banner is visible in every class, and after some time the game crashes as well. I don't understand how the banner ends up in every class; in fact, I have not declared it in the RootViewController class. Can anyone suggest how to integrate AdMob in a cocos2d game so that the banner appears only in particular classes and not in every class? I am using the latest Google AdMob SDK; my code is below. Thanks in advance.

        -(void)AdMob {
            NSLog(@"ADMOB");
            CGSize winSize = [[CCDirector sharedDirector] winSize];

            // Create a view of the standard size at the bottom of the screen.
            if (UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPad) {
                bannerView_ = [[GADBannerView alloc] initWithFrame:CGRectMake(size.width/2-364,
                    size.height - GAD_SIZE_728x90.height, GAD_SIZE_728x90.width, GAD_SIZE_728x90.height)];
            } else {
                // It's an iPhone
                bannerView_ = [[GADBannerView alloc] initWithFrame:CGRectMake(size.width/2-160,
                    size.height - GAD_SIZE_320x50.height, GAD_SIZE_320x50.width, GAD_SIZE_320x50.height)];
            }

            if (UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPad) {
                bannerView_.adUnitID = @"a15062384653c9e";
            } else {
                bannerView_.adUnitID = @"a15062392a0aa0a";
            }

            bannerView_.rootViewController = self;
            [[[CCDirector sharedDirector] openGLView] addSubview:bannerView_];
            [bannerView_ loadRequest:[GADRequest request]];

            GADRequest *request = [[GADRequest alloc] init];
            request.testing = [NSArray arrayWithObjects:GAD_SIMULATOR_ID, nil]; // Simulator
            [bannerView_ loadRequest:request];
        }

        // best practice for removing the bannerView_
        -(void)removeSubviews {
            NSArray *subviews = [[CCDirector sharedDirector] openGLView].subviews;
            for (id SUB in subviews) {
                [(UIView *)SUB removeFromSuperview];
                [SUB release];
            }
            NSLog(@"remove from view");
        }

        // this makes the refreshTimer count
        -(void)targetMethod:(NSTimer *)theTimer {
            // INCREASE THE TIMER AND SECONDS
            elapsedTime++;
            seconds++;
            // INCREASE THE MINUTES EVERY 60 SECONDS
            if (seconds >= 60) {
                seconds = 0;
                minutes++;
                [self removeSubviews];
                [self AdMob];
            }
            NSLog(@"TIME: %02d:%02d", minutes, seconds);
        }

    Read the article
