Search Results

Search found 2361 results on 95 pages for 'incorrect'.

Page 67/95 | < Previous Page | 63 64 65 66 67 68 69 70 71 72 73 74  | Next Page >

  • Problems with dateByAddingComponents and getting the difference of dates with NSDateComponents

    - by Rob
    I am having problems with adding values to dates and also getting differences between dates. The dates and components calculated are incorrect. So for adding, if I add 1.5 months, I only get 1 month; however, if I add any whole number, i.e. 1 or 2 or 3 etc., it calculates correctly. Float32 addAmount = 1.5; NSDateComponents *components = [[[NSDateComponents alloc] init] autorelease]; [components setMonth:addAmount]; NSCalendar *gregorian = [[[NSCalendar alloc] initWithCalendarIdentifier:NSGregorianCalendar] autorelease]; [gregorian setTimeZone:[NSTimeZone timeZoneWithName:@"UTC"]]; NSDate *newDate2 = [gregorian dateByAddingComponents:components toDate:Date1 options:0]; Now for difference, if I have a date that has been added with exactly one year (almost the same code as above), it adds correctly, but when the difference is calculated, I get 0 years, 11 months and 30 days. NSDate *startDate = Date1; NSDate *endDate = Date2; NSCalendar *gregorian = [[NSCalendar alloc] initWithCalendarIdentifier:NSGregorianCalendar]; [gregorian setTimeZone:[NSTimeZone timeZoneWithName:@"UTC"]]; NSUInteger unitFlags = NSYearCalendarUnit | NSMonthCalendarUnit | NSDayCalendarUnit; NSDateComponents *components = [gregorian components:unitFlags fromDate:startDate toDate:endDate options:0]; NSInteger years = [components year]; NSInteger months = [components month]; NSInteger days = [components day]; What am I doing wrong? Also I have added the kCFCalendarComponentsWrap constant in the options for both the adding and difference functions but it made no difference. Thanks
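
    One likely explanation for the 1.5-month case: the month field on NSDateComponents holds a whole integer (NSInteger), so a fractional value is truncated to 1 before the calendar math ever runs. A minimal sketch of the idea follows, in plain Python rather than Objective-C, with a 30-day month assumed purely for illustration; it shows the truncation and one way to split a fractional month into whole months plus days before handing them to the calendar.

        # Conceptual sketch only (Python, not Objective-C).
        add_amount = 1.5
        whole_months = int(add_amount)        # 1 -- the .5 is silently lost
        print(whole_months)

        # One workaround: carry the fraction as days instead.
        # The 30-day month is an assumption for illustration only.
        fraction = add_amount - whole_months
        extra_days = round(fraction * 30)     # 15
        print(whole_months, extra_days)       # 1 15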

    Read the article

  • Hello, I am using the Android code to connect to Facebook but getting "Facebook Server Error + 104 - Incorrect signature"

    - by Shalini Singh
    Hello, I am using the Android code to connect to Facebook but getting a "Facebook Server Error + 104 - Incorrect signature" exception at the onLoginSuccess function. The code is given below .... public class FacebookConnection extends Activity implements LoginListener { private FBRocket fbRocket; /** Called when the activity is first created. */ @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); // You need to put in your Facebook API key here: fbRocket = new FBRocket(this, "test", "e2c8deda78b007466c54f48e6359e02e"); // Determine whether there exists a previously-saved Facebook: if (fbRocket.existsSavedFacebook()) { String str =fbRocket.getAPIKey(); Log.e("Api key", str); fbRocket.loadFacebook(); } else { fbRocket.login(R.layout.main); String str =fbRocket.getAPIKey(); Log.e("Api key", str); } } public void onLoginFail() { fbRocket.displayToast("Login failed!"); fbRocket.login(R.layout.main); } public void onLoginSuccess(Facebook facebook) { fbRocket.displayToast("Login success!******************"); // Set the logged-in user's status: try { facebook.setStatus("I am using Facebook -- it's great!"); String uid = facebook.getFriendUIDs().get(0); // Just get the uid of the first friend returned... fbRocket.displayDialog("Friend's name: " + facebook.getFriend(uid).name); // ... and retrieve this friend's name. } catch (ServerErrorException e) { // Check if the exception was caused by not being logged-in: if (e.notLoggedIn()) { // ...if it was, then login again: fbRocket.login(R.layout.main); } else { System.out.println(e); e.printStackTrace(); } } }

    Read the article

  • div insert in div

    - by lolalola
    Hi, what's wrong with this code? The background disappears and the div after this code displays incorrectly. But why, when I delete "float: left", does everything look good? <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd"> <html> <head> <style type="text/css"> #first{ width: 200px; background-color: #345752; } #left_b{ background:transparent url('img/left.png'); background-position: left top; background-repeat: repeat-y; min-height: 30px; } #right_b{ background:transparent url('img/right.png'); background-position: right top; background-repeat: repeat-y; } #text{ float: left; width: 50px; height: 30px; } #text2{ float: left; width: 70px; height: 30px; } </style> </head> <body> <div id = "first"> <div id = "left_b"> <div id = "right_b"> <div id = "text"> text 1 </div> <div id = "text2"> text 2 </div> </div> </div> </div> </body> </html>

    Read the article

  • How to run autoconf in OSX linking to specified OS SDK

    - by kroko
    Hello! I have written a small command line tool that includes some GPL code. Everything runs smoothly. Using OS 10.6. The external code used has a config.h header file, made by calling autoconf. I'd like to deploy the tool to different OS versions. Thus config.h could look like // config.h #if MAC_OS_X_VERSION_MAX_ALLOWED == MAC_OS_X_VERSION_10_4 // autoconf created config.h content for 10.4 comes here #elif MAC_OS_X_VERSION_MAX_ALLOWED == MAC_OS_X_VERSION_10_5 // autoconf created config.h content for 10.5 comes here #elif MAC_OS_X_VERSION_MAX_ALLOWED == MAC_OS_X_VERSION_10_6 // autoconf created config.h content for 10.6 comes here #else #error "muahahaha" #endif What is the way to tell autoconf to use /Developer/SDKs/MacOSX10.XXXX.sdk/usr/ while generating the config.h? In order to test it I have run #!/bin/bash # for 10.6 export CC="/usr/bin/gcc-4.2" export CXX="/usr/bin/g++-4.2" export MACOSX_DEPLOYMENT_TARGET="10.6" export OSX_SDK="/Developer/SDKs/MacOSX10.6.sdk" export OSX_CFLAGS="-isysroot $OSX_SDK -arch x86_64 -arch i386" export OSX_LDFLAGS="-Wl,-syslibroot,$OSX_SDK -arch x86_64 -arch i386" export CFLAGS=$OSX_CFLAGS export CXXFLAGS=$OSX_CFLAGS export LDFLAGS=$OSX_LDFLAGS before calling ./configure on OS 10.6. I know that the configure script looks for libintl.h, which is not in the "out of box 10.6 / SDK", but is present on the local machine under /usr/local. The config.h header file produced with the method described above has info that libintl.h is in the system - thus "linking" autoconf only to the SDK has failed. Is it happening because... "we don't have a crystal ball"? :). Or is it incorrect "setup"/flag-export before running autoconf, which I hope is the case? If so, then what would be the correct way to set up env variables? Many thanks in advance.

    Read the article

  • Displaying an inverted pyramid of asterisks

    - by onyebuoke
    I need help with my program. I am required to create an inverted pyramid of stars whose number of rows depends on the number of stars the user keys in, but the way I've written it, it does not give an inverted pyramid; it gives a regular pyramid. #include <stdio.h> #include <conio.h> void printchars(int no_star, char space); int getNo_of_rows(void); int main(void) { int numrows, rownum; rownum=0; numrows=getNo_of_rows(); for(rownum=0;rownum<=numrows;rownum++) { printchars(numrows-rownum, ' '); printchars((2*rownum-1), '*'); printf("\n"); } _getche(); return 0; } void printchars(int no_star, char space) { int cnt; for(cnt=0;cnt<no_star;cnt++) { printf("%c",space); } } int getNo_of_rows(void) { int no_star; printf("\n Please enter the number of stars you want to print\n"); scanf("%d",&no_star); while(no_star<1) { printf("\n number incorrect, please enter correct number"); scanf("%d",&no_star); } return no_star; }
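
    A minimal sketch of the intended shape, in Python rather than C just to show the row arithmetic: for an inverted pyramid the star count must start at its maximum and shrink as the row number grows, the opposite of the 2*rownum-1 progression above. The row count of 5 is a sample value standing in for the user's input.

        # Sketch: print an inverted pyramid of stars.
        numrows = 5                              # stand-in for user input
        for rownum in range(numrows, 0, -1):     # rows shrink: numrows .. 1
            spaces = numrows - rownum            # indent grows each row
            stars = 2 * rownum - 1               # star count shrinks each row
            print(" " * spaces + "*" * stars)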

    Read the article

  • Recurrence relation solution

    - by Travis
    I'm revising past midterms for a final exam this week and am trying to make sense of a solution my professor posted for one of the past exams. (You can see the original pdf here, question #6). I'm given the original recurrence relation T(n)=3T(n/2) + n and am told T(1) = 1. I'm pretty sure the solution I've been given is wrong in a few places. The solution is as follows: Let n=2^m T(2^m) = 3T(2^(m-1)) + 2^m 3T(2^(m-1)) = 3^2*T(2^(m-2)) + 2^(m-1)*3 ... 3^(m-1)T(2) = T(1) + 2*3^(m-1) I'm pretty sure this last line is incorrect and they forgot to multiply T(1) by 3^m. He then (tries to) sum the expressions: T(2^m) = 1 + (2^m + 2^(m-1)*3 + ... + 2*3^(m-1)) = 1 + 2^m(1 + (3/2)^1 + (3/2)^2 + ... + (3/2)^(m-1)) = 1 + 2^m((3/2)^m-1)*(1/2) = 1 + 3^m - 2^(m-1) = 1 + n^log 3 - n/2 Thus the algorithm is big Theta of (n^log 3). I'm pretty sure that he also got the summation wrong here. By my calculations this should be as follows: T(2^m) = 2^m + 3 * 2^(m-1) + 3^2 * 2^(m-2) + ... + 3^m (3^m because 3^m*T(1) = 3^m should be added, not 1) = 2^m * ((3/2)^1 + (3/2)^2 + ... + (3/2)^m) = 2^m * sum of (3/2)^i from i=0 to m = 2^m * ((3/2)^(m+1) - 1)/(3/2 - 1) = 2^m * ((3/2)^(m+1) - 1)/(1/2) = 2^(m+1) * 3^(m+1)/2^(m+1) - 2^(m+1) = 3^(m+1) - 2 * 2^m Replacing n = 2^m, and from that m = log n T(n) = 3*3^(log n) - 2*n T(n) is O(3^log n), thus the runtime is big Theta of (3^log n) Does this seem right? Thanks for your help!
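
    For reference, a worked unrolling in LaTeX, assuming the intended recurrence is T(n) = 3T(n/2) + n with T(1) = 1 and the substitution n = 2^m; it agrees with the corrected summation in the question and with the Master theorem case a = 3, b = 2, f(n) = n.

        \begin{align*}
        T(2^m) &= 3\,T(2^{m-1}) + 2^m
                = 3^m\,T(1) + \sum_{k=0}^{m-1} 3^k\,2^{m-k}
                = 3^m + 2^m \sum_{k=0}^{m-1} \left(\tfrac{3}{2}\right)^{k} \\
               &= 3^m + 2^m \cdot \frac{(3/2)^m - 1}{3/2 - 1}
                = 3^m + 2\cdot 3^m - 2^{m+1}
                = 3^{m+1} - 2^{m+1}.
        \end{align*}
        % With n = 2^m, m = \log_2 n, and the identity 3^{\log_2 n} = n^{\log_2 3}:
        \[
        T(n) = 3\,n^{\log_2 3} - 2n = \Theta\!\left(n^{\log_2 3}\right) \approx \Theta\!\left(n^{1.585}\right),
        \]
        % so n^{\log 3} and 3^{\log n} (logs base 2) describe the same growth rate.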

    Read the article

  • Pixel shader weird compilation error

    - by ytrewq
    Hi, I'm experimenting with shaders a bit and I keep getting this weird compilation error that's driving me crazy! The following pixel shader code snippet: DirectionVector = normalize(f3LightPosition[i] - PixelPos); LightVec = PixelNormal - DirectionVector; // Get the light strenght factor LightStrFactor = float(abs((LightVec.x + LightVec.y + LightVec.z) / 3.0f)); // TEST!!! LightStrFactor = 1.0f; // Add this light to the total light on this pixel LightVal += f4Light[i] * LightStrFactor; works perfectly, but as soon as I remove the "LightStrFactor = 1.0f;" line, i.e. letting 'LightStrFactor' be the result of the calculation above, it fails to compile the shader. LightStrFactor is a float; LightVal & f4Light[i] are float4; all the rest are float3. My question, besides why it doesn't compile, is: how come the DX compiler cares about the value of a float? Even if my values are incorrect, shouldn't that be a run-time matter? The shader compilation code is this: /* Compile the bitch */ if (FAILED(D3DXCompileShaderFromFile(fileName, NULL, NULL, "PS_MAIN", "ps_2_0", 0, &this->m_pCode, NULL, &this->m_constantTable))) GraphicException("Failed to compile pixel shader!"); // <-- gets here :( if (FAILED(g_D3dDevice->CreatePixelShader( (DWORD*)this->m_pCode->GetBufferPointer(), &this->m_hPixelShader ))) GraphicException("Failed to create pixel shader!"); this->m_fLoaded = true; Any help is appreciated, thanks!!! :]

    Read the article

  • ntpd on Fedora Core 6 with high negative time reset values

    - by Mark White
    The basic problem is we have an FC6 server instance running on a virtual machine, and the system time seems to have been slowly varying until it is now causing a problem. The server runs 24/7 and has been up for 155 days. It has been changed to show GMT, and reports the time as (example) 00:15:15 GMT whereas the actual time is 00:00:00 GMT. This is an offset of 915 seconds. selinux has been changed to 'setenforce 0' for testing and I am running as root. I stop the ntpd service and change the time in System|Administration|Date & Time. The time still shows the same with 'date' in bash. There are no error logs. I change the date with 'date --set' in bash. The response confirms the changed date. I run 'date' and the incorrect date is shown. There are no error logs. I start the ntpd service and /var/log/messages shows success with 'time reset -915.720139s'. The date remains unchanged. ntpq -p shows three time servers all have offsets of around -915 seconds. I stop the ntpd service and try 'ntpd -gqx' and get the same result as above - success, but a large negative time reset. I've tried varying combinations of the above, and a few more settings in System|Administration|Date & Time - no change. I just need to reset the system time to GMT. No offset. But I can't wait for ntpd to slew the time over the next few weeks. Any advice is welcome, cheers! Surely this shouldn't be this difficult... Mark...

    Read the article

  • Relation between HTTP Keep Alive duration and TCP timeout duration

    - by Suresh Kumar
    I am trying to understand the relation between TCP/IP and HTTP timeout values. Are these two timeout values different or the same? Most Web servers allow users to set the HTTP Keep Alive timeout value through some configuration. How is this value used by the Web servers? Is this value just set on the underlying TCP/IP socket, i.e. are the HTTP Keep Alive timeout and the TCP/IP Keep Alive timeout the same? Or are they treated differently? My understanding is (maybe incorrect): the Web server uses the default timeout on the underlying TCP socket (i.e. indefinite) regardless of the configured HTTP Keep Alive timeout and creates a Worker thread that counts down the specified HTTP timeout interval. When the Worker thread's countdown hits zero, it closes the connection. EDIT: My question is about the relation or difference between the two timeout durations, i.e. what will happen when the HTTP keep-alive timeout duration and the timeout on the socket (SO_TIMEOUT) which the Web server uses are different? Should I even worry about whether these two are the same or not?
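
    One way to see why the two are separate knobs is that they live at different layers: TCP keep-alive is a socket option the OS uses to probe idle connections, while HTTP keep-alive is an application-level agreement (a header plus a server-side idle timer) to reuse the connection for further requests; a server's HTTP keep-alive timeout is enforced by the server's own timer, not by SO_KEEPALIVE. A rough Python sketch, with example.com and the timings as placeholder values:

        import socket

        # TCP keep-alive: a transport-level option on the socket itself.
        sock = socket.create_connection(("example.com", 80), timeout=10)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)

        # HTTP keep-alive: an application-level request to reuse this connection.
        request = (
            "GET / HTTP/1.1\r\n"
            "Host: example.com\r\n"
            "Connection: keep-alive\r\n"
            "\r\n"
        )
        sock.sendall(request.encode("ascii"))
        print(sock.recv(4096)[:200])   # start of the response headers
        sock.close()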

    Read the article

  • plugin from github not successfully installing

    - by JohnMerlino
    Hey all, I tried to install the highcharts-rails plugin from github as specified in the instructions: Installation Get the plugin: script/plugin install git://github.com/loudpixel/highcharts-rails.git Run the rake setup: rake highcharts_rails:install But when I run the script/plugin install... It installs a couple of files only and not all the required files, I presume, because when I run rake highcharts_rails:install I get the following: rake aborted! Don't know how to build task 'highcharts_rails:install' All it installed for me was: jquery.js jrails.js jquery-ui.js I noticed on the site http://github.com/loudpixel/highcharts-rails It has all this: file MIT-LICENSE February 08, 2010 Initial commit [abbottry] file README.md February 09, 2010 Added installation section to README [jsiarto] file Rakefile February 08, 2010 Initial commit [abbottry] directory generators/ February 08, 2010 Initial commit [abbottry] file init.rb February 08, 2010 Initial commit [abbottry] directory javascripts/ February 08, 2010 Added jquery 1.3.2 script [abbottry] directory lib/ February 08, 2010 Initial commit [abbottry] directory tasks/ February 08, 2010 Incorrect path to plugin for rake task [abbottry] directory test/ February 08, 2010 Initial commit [abbottry] file uninstall.rb February 08, 2010 Initial commit [abbottry] So I'm not sure what I'm doing wrong to not get these files installed properly. Thanks for any response.

    Read the article

  • Not able to use Spring beans outside container. Always picking up WebSphere context

    - by Abhijit
    We have a whole lot of Spring beans defined in our project which we deploy in WebSphere. One example being the following: oracle.jdbc.driver.OracleDriver jdbc:oracle:thin:@oracle:1521:OASIS oasis_owner o3ngin33r Now we have a service locator class like the following: private static ServiceLocator serviceLocator = new ServiceLocator(); private static ApplicationContext beanFactory = new ClassPathXmlApplicationContext(APPLICATION_CONTEXT_LOCATION); protected ServiceLocator() { } public ApplicationContext getBeanFactory() { return beanFactory; } public static ServiceLocator getInstance() { return serviceLocator; Now when I am trying to do this from my JUnit test, ServiceLocator.getInstance().getBean("oasJdbcData"), I am getting the following exception Caused by: javax.naming.ServiceUnavailableException: Could not obtain an initial context due to a communication failure. Since no provider URL was specified, the default provider URL of "corbaloc:iiop:[email protected]:2809/NameService" was used. Make sure that any bootstrap address information in the URL is correct and that the target name server is running. Possible causes other than an incorrect bootstrap address or unavailable name server include the network environment and workstation network configuration. [Root exception is org.omg.CORBA.TRANSIENT: java.net.ConnectException: Connection refused: connect:host=192.168.255.1,port=2809 vmcid: IBM minor code: E02 completed: No] at com.ibm.ws.naming.util.WsnInitCtxFactory.mapInitialReferenceFailure(WsnInitCtxFactory.java:1968) at com.ibm.ws.naming.util.WsnInitCtxFactory.mergeWsnNSProperties(WsnInitCtxFactory.java:1172) at com.ibm.ws.naming.util.WsnInitCtxFactory.getRootContextFromServer(WsnInitCtxFactory.java:720) at com.ibm.ws.naming.util.WsnInitCtxFactory.getRootJndiContext(WsnInitCtxFactory.java:643) at com.ibm.ws.naming.util.WsnInitCtxFactory.getInitialContextInternal(WsnInitCtxFactory.java:489) at com.ibm.ws.naming.util.WsnInitCtx.getContext(WsnInitCtx.java:113) at com.ibm.ws.naming.util.WsnInitCtx.getContextIfNull(WsnInitCtx.java:428) at com.ibm.ws.naming.util.WsnInitCtx.lookup(WsnInitCtx.java:144) at javax.naming.InitialContext.lookup(InitialContext.java:361) at org.springframework.jndi.JndiTemplate$1.doInContext(JndiTemplate.java:155) at org.springframework.jndi.JndiTemplate.execute(JndiTemplate.java:88) at org.springframework.jndi.JndiTemplate.lookup(JndiTemplate.java:153) at org.springframework.jndi.JndiTemplate.lookup(JndiTemplate.java:178) at org.springframework.jndi.JndiLocatorSupport.lookup(JndiLocatorSupport.java:104) at org.springframework.jndi.JndiObjectLocator.lookup(JndiObjectLocator.java:105) at org.springframework.jndi.JndiObjectFactoryBean.lookupWithFallback(JndiObjectFactoryBean.java:200) at org.springframework.jndi.JndiObjectFactoryBean.afterPropertiesSet(JndiObjectFactoryBean.java:186) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1368) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1334) ... 33 more which clearly shows it is looking for the InitialContext of WebSphere, which I don't want. Can anybody tell me what I am doing wrong?

    Read the article

  • Help with Custom Workflow that monitors an object's state

    - by zSysop
    I need to write a workflow that monitors the status of an object. (It could wait for days or hours for a state to change.) I have the following states for the object (let's call it an Issue object): 1) Created 2) Unowned 3) Owned 4) Unassigned 5) Assigned 6) In Progress 7) Signed Off 8) Closed I would also need to take some action on an object if the object was within a certain state for a defined period (not really sure on how this can be accomplished either). The object's owner/assignee can change at any point (i.e. go from In Progress to Unowned), so I am guessing that a state machine diagram is what I would need to use. If my thinking is incorrect then please let me know. My application is written in c# .net 3.5. I was thinking about having a service method called CreateIssue that would insert the ticket into the db and then begin an instance of a workflow (with the object or an id of the object as parameters). I wasn't sure of how the workflow would then know when a particular object has been updated, or if the object's state has changed. I've done some really simple "hello world" type of apps with Windows Workflow Foundation 3.5 but have not yet grasped how to go about implementing something like this. Any direction on this will be extremely helpful. Thanks in advance.
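
    A language-agnostic sketch of the bookkeeping involved (written in Python below, not WF 3.5 code): keep the current state plus a timestamp of the last transition, and let a periodic check or timer escalate any Issue that has sat in one state too long. In WF 3.5 the analogous structure would be a state machine workflow that waits on external events; the snippet only illustrates the idea, and the three-day threshold is a placeholder.

        import time

        STATES = ["Created", "Unowned", "Owned", "Unassigned", "Assigned",
                  "In Progress", "Signed Off", "Closed"]

        class Issue:
            def __init__(self):
                self.state = "Created"
                self.changed_at = time.time()

            def transition(self, new_state):
                if new_state not in STATES:
                    raise ValueError(new_state)
                self.state = new_state
                self.changed_at = time.time()

            def seconds_in_state(self):
                return time.time() - self.changed_at

        # A timer or polling job could then escalate long-lived states:
        issue = Issue()
        issue.transition("In Progress")
        if issue.seconds_in_state() > 3 * 24 * 3600:   # placeholder threshold
            print("escalate: stuck In Progress for over three days")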

    Read the article

  • Converting a size_t into an integer (c++)

    - by JeanOTF
    Hello, I've been trying to make a for loop that will iterate based off of the length of a network packet. In the API there exists a (size_t) variable, event.packet->dataLength. I want to iterate from 0 to event.packet->dataLength - 7, increasing i by 10 each time it iterates, but I am having a world of trouble. I looked for solutions but have been unable to find anything useful. I tried converting the size_t to an unsigned int and doing the arithmetic with that but unfortunately it didn't work. Basically all I want is this: for (int i = 0; i < event.packet->dataLength - 7; i+=10) { } Though every time I do something like this or attempt my conversions, the i < # part is a huge number. They gave a printf statement in a tutorial for the API which used "%u" to print the actual number; however, when I convert it to an unsigned int it is still incorrect. I'm not sure where to go from here. Any help would be greatly appreciated :)
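
    One common cause of the huge bound, offered as a guess since the surrounding API isn't shown: dataLength is unsigned, so whenever it is smaller than 7 the expression dataLength - 7 wraps around instead of going negative, and i is then compared against an enormous unsigned value. The Python lines below just simulate 64-bit size_t arithmetic to show the effect; the usual C++ fix is to do the subtraction in a signed type, e.g. static_cast<int>(event.packet->dataLength) - 7.

        # Simulate unsigned 64-bit (size_t) arithmetic when the result would be negative.
        data_length = 5                      # a short packet
        bound = (data_length - 7) % 2**64    # size_t wraps modulo 2**64
        print(bound)                         # 18446744073709551614 -- a huge loop bound

        # With signed arithmetic the bound behaves as expected:
        print(data_length - 7)               # -2, so the loop body never runs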

    Read the article

  • Detecting duplicate values in a column of a DataTable while traversing through it

    - by Ashish Gupta
    I have a DataTable with Id (Guid) and Name (string) columns. I traverse through the data table and run a validation check on the Name (say, it should contain only letters and numbers) and then add the corresponding Id to a List if the Name passes the validation. Something like below:- List<Guid> validIds=new List<Guid>(); foreach(DataRow row in DataTable1.Rows) { if(IsValid(row["Name"])) { validIds.Add((Guid)row["Id"]); } } In addition to this validation I should also check if the Name is not repeating in the whole DataTable (even for case-sensitiveness). If it is repeating, I should not add the corresponding Id to the List. Things I am thinking/have thought about:- 1) I can have another List, check for the "Name" in the same; if it exists, I will add the corresponding Guid. 2) I cannot use HashSet as that would treat "Test" and "test" as different strings and not duplicates. 3) Take the DataTable to another one where I have the distinct names (this I haven't tried and the code might be incorrect, please correct me wherever possible) DataTable dataTableWithDistinctName = new DataTable(); dataTableWithDistinctName.CaseSensitive=true CopiedDataTable=DataTable1.DefaultView.ToTable(true,"Name"); I would loop through the original DataTable and check the existence of the "Name" in the CopiedDataTable; if it exists, I won't add the Id to the List. Is there a better and more optimal way to achieve the same? I need to always think of performance. Although there are many related questions on SO, I didn't find a problem similar to this. If you could point me to a question similar to this, it would be helpful. Thanks
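
    On point 2, .NET itself has a standard answer: HashSet<string> (or Dictionary) can be constructed with StringComparer.OrdinalIgnoreCase, which makes "Test" and "test" hash and compare as equal. The sketch below shows the same single-pass idea in Python with a casefolded "seen" set; the rows, the is_valid check, and the choice to keep the first occurrence of a repeated name are placeholders for the real validation and policy.

        import uuid

        rows = [                                     # stand-in for the DataTable
            {"Id": uuid.uuid4(), "Name": "Test1"},
            {"Id": uuid.uuid4(), "Name": "test1"},   # duplicate except for case
            {"Id": uuid.uuid4(), "Name": "Other2"},
        ]

        def is_valid(name):
            return name.isalnum()                    # placeholder validation

        valid_ids, seen = [], set()
        for row in rows:
            key = row["Name"].casefold()             # case-insensitive key
            if is_valid(row["Name"]) and key not in seen:
                valid_ids.append(row["Id"])          # keeps the first occurrence
            seen.add(key)

        print(len(valid_ids))                        # 2 -- the second "test1" is skipped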

    Read the article

  • Accessing a COM object's vtable in C#

    - by Martin Booth
    I'm attempting to access a COM object's vtable in memory and getting nowhere. From what I have read, the first 4 bytes in memory of a COM object should be a pointer to the vtableptr, which in turn is a pointer to the vtable. Although I'm not sure I expect to find my test method at slot 7 in this COM object, I have tried them all and nothing looks like the address of my function. If anyone can make the necessary changes to this sample to get it to work (the aim is just to try and invoke the Test method on the Tester class by accessing the COM object's vtable and locating the method in whichever row it turns up), I would be very grateful. I know it is rare that anyone would need to do this, but please accept I do need to do something quite similar and have considered the alternatives. Thanks in advance [ComVisible(true)] public class Tester { public void Test() { MessageBox.Show("Here"); } } public delegate void TestDelegate(); private void main() { Tester t = new Tester(); GCHandle objectHandle = GCHandle.Alloc(t); IntPtr objectPtr = GCHandle.ToIntPtr(objectHandle); IntPtr vTable = (IntPtr)Marshal.ReadInt32(objectPtr); IntPtr vTablePtr = (IntPtr)Marshal.ReadInt32(vTable); IntPtr functionPtr = (IntPtr)Marshal.ReadInt32(vTablePtr, 7 * 4); // 7 may be incorrect but i have tried many different values TestDelegate func = (TestDelegate)Marshal.GetDelegateForFunctionPointer(functionPtr, typeof(TestDelegate)); func(); objectHandle.Free(); }

    Read the article

  • Git-svn refuses to create branch on svn repository error: "not in the same repository"

    - by Danny
    I am attempting to create an svn branch using git-svn. The repository was created with --stdlayout. Unfortunately it generates an error stating that "Source and dest appear not to be in the same repository". The error appears to be the result of it not including the username in the source URL. $ git svn branch foo-as-bar -m "Attempt to make Foo into Bar." Copying svn+ssh://my.foo.company/r/sandbox/foo/trunk at r1173 to svn+ssh://[email protected]/r/sandbox/foo/branches/foo-as-bar... Trying to use an unsupported feature: Source and dest appear not to be in the same repository (src: 'svn+ssh://my.foo.company/r/sandbox/foo/trunk'; dst: 'svn+ssh://[email protected]/r/sandbox/foo/branches/foo-as-bar') at /home/me/.install/git/libexec/git-core/git-svn line 610 I initially thought this was simply a configuration issue, but examination of .git/config doesn't suggest anything incorrect. [svn-remote "svn"] url = svn+ssh://[email protected]/r fetch = sandbox/foo/trunk:refs/remotes/trunk branches = sandbox/foo/branches/*:refs/remotes/* tags = sandbox/foo/tags/*:refs/remotes/tags/* I am using git version 1.6.3.3. Can anyone shed any light on why this might be occurring, and how best to address it?

    Read the article

  • Required Working Precision for the BBP Algorithm?

    - by brainfsck
    Hello, I'm looking to compute the nth digit of Pi in a low-memory environment. As I don't have decimals available to me, this integer-only BBP algorithm in Python has been a great starting point. I only need to calculate one digit of Pi at a time. How can I determine the lowest I can set D, the "number of digits of working precision"? D=4 gives me many correct digits, but a few digits will be off by one. For example, computing digit 393 with precision of 4 gives me 0xafda, from which I extract the digit 0xa. However, the correct digit is 0xb. No matter how high I set D, it seems that testing a sufficient number of digits finds one where the formula returns an incorrect value. I've tried upping the precision when the digit is "close" to another, e.g. 0x3fff or 0x1000, but cannot find any good definition of "close"; for instance, calculating at digit 9798 gives me 0xcde6, which is not very close to 0xd000, but the correct digit is 0xd. Can anyone help me figure out how much working precision is needed to calculate a given digit using this algorithm? Thank you.
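
    There is no single D that is safe for every position, because the truncated tail can always push the leading digit across a boundary; a pragmatic scheme is to recompute with more guard digits until the extracted digit stops changing. The sketch below only illustrates that loop: pi_fractional_hex is a hypothetical stand-in for the integer-only BBP routine from the question (it is not a real library call), and agreement across two precisions is a heuristic, not a proof of correctness.

        def pi_fractional_hex(n, precision):
            """Hypothetical stand-in: would return the first `precision` hex
            digits of the fractional part at position n, as an integer, using
            the integer-only BBP routine from the question."""
            raise NotImplementedError

        def nth_hex_digit(n, start_precision=4, max_precision=16):
            """Raise the working precision until the leading digit is stable."""
            previous = None
            for d in range(start_precision, max_precision + 1):
                value = pi_fractional_hex(n, d)
                digit = value >> (4 * (d - 1))     # leading hex digit
                if digit == previous:
                    return digit                   # two precisions agree
                previous = digit
            raise ValueError("digit did not stabilise; raise max_precision")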

    Read the article

  • Objective-C memory model

    - by TofuBeer
    I am attempting to wrap my head around one part of the Objective-C memory model (specifically on the iPhone, so no GC). My background is C/C++/Java, and I am having an issue with the following bit of code (also wondering if I am doing this in an "Objective-C way" or not): - (NSSet *) retrieve { NSMutableSet *set; set = [NSMutableSet new]; // would normally fill the set in here with some data return ([set autorelease]); } - (void) test { NSSet *setA; NSSet *setB; setA = [self retrieve]; setB = [[self retrieve] retain]; [setA release]; [setB release]; } start EDIT Based on comments below, the updated retrieve method: - (NSSet *) retrieve { NSMutableSet *set; set = [[[NSMutableSet alloc] initWithCapacity:100] autorelease]; // would normally fill the set in here with some data return (set); } end EDIT The above code gives a warning for [setA release] "Incorrect decrement of the reference count of an object is not owned at this point by the caller". I thought that "new" set the reference count to 1. Then the "retain" call would add 1, and the "release" call would drop it by 1. Given that, wouldn't setA have a reference count of 0 at the end and setB have a reference count of 1 at the end? From what I have figured out by trial and error, setB is correct, and there is no memory leak, but I'd like to understand why that is the case (what is wrong with my understanding of "new", "autorelease", "retain", and "release").

    Read the article

  • Replacing a Namespace with XSLT

    - by er4z0r
    Hi I want to work around a 'bug' in certain RSS-feeds, which use an incorrect namespace for the mediaRSS module. I tried to do it by manipulating the DOM programmatically, but using XSLT seems more flexible to me. Example: <media:thumbnail xmlns:media="http://search.yahoo.com/mrss" url="http://www.suedkurier.de/storage/pic/dpa/infoline/brennpunkte/4311018_0_merkelxI_24280028_original.large-4-3-800-199-0-3131-2202.jpg" /> <media:thumbnail url="http://www.suedkurier.de/storage/pic/dpa/infoline/brennpunkte/4311018_0_merkelxI_24280028_original.large-4-3-800-199-0-3131-2202.jpg" /> Where the namespace must be http://search.yahoo.com/mrss/ (mind the slash). This is my stylesheet: <?xml version="1.0" encoding="UTF-8"?> <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"> <xsl:template match="//*[namespace-uri()='http://search.yahoo.com/mrss']"> <xsl:element name="{local-name()}" namespace="http://search.yahoo.com/mrss/" > <xsl:apply-templates select="@*|*|text()" /> </xsl:element> </xsl:template> </xsl:stylesheet> Unfortunately the result of the transformation is an invalid XML and my RSS-Parser (ROME Library) does not parse the feed anymore: java.lang.IllegalStateException: Root element not set at org.jdom.Document.getRootElement(Document.java:218) at com.sun.syndication.io.impl.RSS090Parser.isMyType(RSS090Parser.java:58) at com.sun.syndication.io.impl.FeedParsers.getParserFor(FeedParsers.java:72) at com.sun.syndication.io.WireFeedInput.build(WireFeedInput.java:273) at com.sun.syndication.io.WireFeedInput.build(WireFeedInput.java:251) ... 8 more What is wrong with my stylesheet?

    Read the article

  • Google Federated Login vs Hybrid Protocol vs Google Data Authentication. Whats's the Difference?

    - by johnfelix
    Hi, I am trying to implement Google Authentication in my website, in which I would also be pulling some Google Data using the Google Data API, and I am using Google App Engine with Jinja2. My question is that so many ways are mentioned to do it. I am confused between Google Federated Login, Google Data Protocol, and Hybrid Protocol. Are these things the same, or different ways to do the same thing? From what I read and understood, which might be incorrect, Google Federated Login uses the hybrid protocol to authenticate and fetch the Google data. Is there a proper guide to implement any one of these in Python? Examples which I found at the Google link are kind of different. From what I understood, correct me if I am wrong, I have to implement only the OpenID Consumer part. In order to implement Google Federated Login in Python, I saw that we need to download a separate library from openid-enabled.com, but I found a different library for the Google Data implementation at http://code.google.com/p/gdata-python-client/ As you can see, I am confused a lot :D. Please help me :) Thanks

    Read the article

  • SHA function issues

    - by Damian James
    I have this php code from my login.php if (isset($_POST['logIn'])) { $errmsg = ""; $logname = mysqli_real_escape_string($dbc, trim($_POST['usernameIn'])); $logpassword = mysqli_real_escape_string($dbc, trim($_POST['passwordIn'])); $query = "SELECT user_id, username FROM members WHERE username = '$logname' AND password = SHA('$logpassword')"; $data = mysqli_query($dbc, $query); if (mysqli_num_rows($data) == 1) { $row = mysqli_fetch_array($data); setcookie('user_id', $row['user_id'], time() + (60 * 60 * 24 * 30)); //expires after 30 days setcookie('username', $row['username'], time() + (60 * 60 * 24 * 30)); $home = 'http://' . $_SERVER['HTTP_HOST'] . dirname($_SERVER['PHP_SELF']) . '/index.php'; header('Location: ' . $home); } else { $errmsg = '<p class="errormsg">Username or password is incorrect.</p>'; } } And for some reason, it always ends up setting $errmsg in the else statement. I am sure that I'm entering information (username,password) that is correct and exists in the database. I insert my values (from a signup script) using this query: $query = "INSERT INTO members (username, password, email) VALUES ('$username', SHA('$password'), '$email')"; Anyone see the problem with this script? Thanks!
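
    Two things worth checking, offered as guesses since the table schema isn't shown: the SELECT's SHA() must see exactly the same string that was hashed at signup (here the login password is passed through mysqli_real_escape_string before hashing, which can add backslashes the signup value never had), and the password column must be at least 40 characters wide, since MySQL's SHA() returns a 40-character hex digest and a shorter column can truncate the stored value so the comparison never matches. The Python lines below only illustrate the digest length and the truncation effect; hashlib.sha1 produces the same 40-hex-character form as MySQL's SHA().

        import hashlib

        password = "s3cr3t"
        digest = hashlib.sha1(password.encode("utf-8")).hexdigest()
        print(len(digest), digest)     # 40 hex characters

        # If the column were, say, VARCHAR(32), only a prefix would be stored,
        # and the later comparison against the full digest would always fail:
        stored = digest[:32]
        print(stored == digest)        # False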

    Read the article

  • Beginner problems with references to arrays in python 3.1.1

    - by Protean
    As part of the last assignment in a beginner Python programming class, I have been assigned a traveling salesman problem. I settled on a recursive function to find each permutation and the sum of the distances between the destinations; however, I am having a lot of problems with references. Arrays in different instances of the Permute and Main functions of TSP seem to be pointing to the same reference. from math import sqrt class TSP: def __init__(self): self.CartisianCoordinates = [['A',[1,1]], ['B',[2,2]], ['C',[2,1]], ['D',[1,2]], ['E',[3,3]]] self.Array = [] self.Max = 0 self.StoredList = ['',0] def Distance(self, i1, i2): x1 = self.CartisianCoordinates[i1][1][0] y1 = self.CartisianCoordinates[i1][1][1] x2 = self.CartisianCoordinates[i2][1][0] y2 = self.CartisianCoordinates[i2][1][1] return sqrt(pow((x2 - x1), 2) + pow((y2 - y1), 2)) def Evaluate(self): temparray = [] Data = [] for i in range(len(self.CartisianCoordinates)): Data.append([]) for i1 in range(len(self.CartisianCoordinates)): for i2 in range(len(self.CartisianCoordinates)): if i1 != i2: temparray.append(self.Distance(i1, i2)) else: temparray.append('X') Data[i1] = temparray temparray = [] self.Array = Data self.Max = len(Data) def Permute(self,varray,index,vcarry,mcarry): #Problem Class array = varray[:] carry = vcarry[:] for i in range(self.Max): print ('ARRAY:', array) print (index,i,carry,array[index][i]) if array[index][i] != 'X': carry[0] += self.CartisianCoordinates[i][0] carry[1] += array[index][i] if len(carry) != self.Max: temparray = array[:] for j in range(self.Max):temparray[j][i] = 'X' index = i mcarry += self.Permute(temparray,index,carry,mcarry) else: return mcarry print ('pass',mcarry) return mcarry def Main(self): out = [] self.Evaluate() for i in range(self.Max): array = self.Array[:] #array appears to maintain the same reference after each copy, resulting in an incorrect array being passed to Permute after the first iteration. print (self.Array[:]) for j in range(self.Max):array[j][i] = 'X' print('I:', i, array) out.append(self.Permute(array,i,[str(self.CartisianCoordinates[i][0]),0],[])) return out SalesPerson = TSP() print(SalesPerson.Main()) It would be greatly appreciated if you could provide me with help in solving the reference problems I am having. Thank you.
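
    The symptom described in the comments matches Python's shallow-copy behaviour: varray[:] and self.Array[:] copy only the outer list, so the inner row lists are still shared, and temparray[j][i] = 'X' therefore also marks the rows inside self.Array. A minimal sketch of the difference, independent of the TSP code (copy.deepcopy is one way to get fully independent rows):

        import copy

        matrix = [[1, 2], [3, 4]]

        shallow = matrix[:]            # new outer list, same inner row objects
        shallow[0][0] = 'X'
        print(matrix[0][0])            # 'X' -- the original row was mutated too

        matrix = [[1, 2], [3, 4]]
        deep = copy.deepcopy(matrix)   # copies the rows as well
        deep[0][0] = 'X'
        print(matrix[0][0])            # 1 -- the original is untouched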

    Read the article

  • Mathematical annotations in a PDF file

    - by kvaruni
    I like to annotate papers I read in a digital way. Numerous programs exist to help in this process. For example, on OS X one can use programs such as Skim or even Preview. However, making annotations is dreadful when one wishes to add mathematical annotations, such as formulas or Greek letters. A cumbersome "solution" is to select the desired symbol one by one using the Special Characters palette, though this considerably slows down the annotation process. Is there any way to add mathematical annotations to a PDF? The only two limitations that I would impose on a solution are that 1) the mathematical text needs to be selectable, i.e. it must be text, and 2) I want to limit the number of programs I need to make the process as painless as possible. Some of the more promising solutions I have tried include generating LaTeX with LaTeXiT, but it seems to be impossible to add a PDF on top of another PDF. Another attempt was to use jsMath to generate the symbols and copy-paste these as annotations using one of the jsMath fonts. This results in unreadable, incorrect characters.

    Read the article

  • Understanding floating point problems

    - by Maxim Gershkovich
    Could someone here please help me understand how to determine when floating point limitations will cause errors in your calculations? For example, the following code. CalculateTotalTax = function (TaxRate, TaxFreePrice) { return ((parseFloat(TaxFreePrice) / 100) * parseFloat(TaxRate)).toFixed(4); }; I have been unable to input any two values that have caused an incorrect result for this method. If I remove the toFixed(4) I can in fact see where the calculations start to lose accuracy (somewhere around the 6th decimal place). Having said that though, my understanding of floats is that even small numbers can sometimes fail to be represented exactly; or have I misunderstood, and can 4 decimal places (for example) always be represented accurately? MSDN explains floats as such... This means they cannot hold an exact representation of any quantity that is not a binary fraction (of the form k / (2 ^ n) where k and n are integers) Now I assume this applies to all floats (including those used in JavaScript). Fundamentally my question boils down to this. How can one determine if any specific method will be vulnerable to errors in floating point operations, at what precision will those errors materialize and what inputs will be required to produce those errors? Hopefully what I am asking makes sense.
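
    A quick way to see the limitation: JavaScript's Number and the Python floats below are both IEEE-754 doubles, so any decimal fraction that is not a sum of powers of two is stored as the nearest representable value, and whether a given method "fails" depends on whether those tiny representation errors survive the arithmetic and the final rounding. A small illustration:

        from decimal import Decimal

        # 0.1 is not exactly representable as a binary fraction:
        print(Decimal(0.1))       # 0.1000000000000000055511151231257827021181583404541015625
        print(0.1 + 0.2 == 0.3)   # False -- the stored values differ in the last bits

        # Rounding to a few decimal places usually hides the error, which is
        # why toFixed(4) masks it in the tax example:
        print(round(0.1 + 0.2, 4) == 0.3)   # True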

    Read the article

  • How Random is System.Guid.NewGuid()? (Take two)

    - by Vilx-
    Before you start marking this as a duplicate, hear me out. The other question has a (most likely) incorrect accepted answer. I do not know how .NET generates its GUIDs, probably only Microsoft does, but there's a high chance it simply calls CoCreateGuid(). That function however is documented to be calling UuidCreate(). And the algorithms for creating a UUID are pretty well documented. Long story short, be that as it may, it seems that System.Guid.NewGuid() indeed uses the version 4 UUID generation algorithm, because all the GUIDs it generates match the criteria (see for yourself, I tried a couple million GUIDs, they all matched). In other words, these GUIDs are almost random, except for a few known bits. This then again raises the question - how random IS this random? As every good little programmer knows, a pseudo-random number algorithm is only as random as its seed (aka entropy). So what is the seed for UuidCreate()? How often is the PRNG re-seeded? Is it cryptographically strong, or can I expect the same GUIDs to start pouring out if two computers accidentally call System.Guid.NewGuid() at the same time? And can the state of the PRNG be guessed if sufficiently many sequentially generated GUIDs are gathered?
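
    The "few known bits" can be checked directly: an RFC 4122 version-4 UUID fixes the version nibble to 4 and the two high bits of the variant field, leaving 122 bits to the generator. A quick check in Python (Python's uuid4, not .NET's generator, but the layout being inspected is the same RFC 4122 format):

        import uuid

        for _ in range(5):
            u = uuid.uuid4()
            s = str(u)              # xxxxxxxx-xxxx-4xxx-Nxxx-xxxxxxxxxxxx
            print(s, u.version)     # version is always 4
            assert s[14] == "4"     # version nibble
            assert s[19] in "89ab"  # variant: top two bits are 10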

    Read the article

< Previous Page | 63 64 65 66 67 68 69 70 71 72 73 74  | Next Page >