Search Results

Search found 8268 results on 331 pages for 'difference'.

Page 268/331 | < Previous Page | 264 265 266 267 268 269 270 271 272 273 274 275  | Next Page >

  • HTTP 1.0 vs 1.1

    - by Jason Baker
    Could somebody give me a brief overview of the differences between HTTP 1.0 and HTTP 1.1? I've spent some time with both of the RFCs, but haven't been able to pull out a lot of difference between them. Wikipedia says this: "HTTP/1.1 (1997-1999) Current version; persistent connections enabled by default and works well with proxies. Also supports request pipelining, allowing multiple requests to be sent at the same time, allowing the server to prepare for the workload and potentially transfer the requested resources more quickly to the client." But that doesn't mean a lot to me. I realize this is a somewhat complicated subject, so I'm not expecting a full answer, but can someone give me a brief overview of the differences at a bit lower level? By this I mean the information I would need to know to implement either an HTTP server or application. I'm really looking for a nudge in the right direction so that I can figure it out on my own.
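
    For illustration, a minimal Python sketch of the most visible wire-level differences: HTTP/1.1 requires a Host header and keeps the connection open by default, while HTTP/1.0 closes it after every response. The host name here is just an example, and "Connection: close" is sent on the 1.1 request only so the read loop below terminates.

      import socket

      def fetch_status_line(version, host="example.com"):
          # Open a plain TCP connection to port 80 and speak HTTP by hand.
          s = socket.create_connection((host, 80))
          if version == "1.0":
              # HTTP/1.0: Host header optional, server closes the connection afterwards.
              request = "GET / HTTP/1.0\r\n\r\n"
          else:
              # HTTP/1.1: Host header mandatory, connection persistent unless a
              # "Connection: close" header is sent (done here just to end the loop).
              request = "GET / HTTP/1.1\r\nHost: " + host + "\r\nConnection: close\r\n\r\n"
          s.sendall(request.encode("ascii"))
          response = b""
          while True:
              chunk = s.recv(4096)
              if not chunk:          # the peer closed the connection
                  break
              response += chunk
          s.close()
          return response.split(b"\r\n", 1)[0]   # status line only

      print(fetch_status_line("1.0"))
      print(fetch_status_line("1.1"))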

    Read the article

  • Readability and IF-block brackets: best practice

    - by MasterPeter
    I am preparing a short tutorial for level 1 uni students learning JavaScript basics. The task is to validate a phone number. The number must not contain non-digits and must be 14 digits long or less. The following code excerpt is what I came up with, and I would like to make it as readable as possible.

      if ( //set of rules for invalid phone number
        phoneNumber.length == 0       //empty
        || phoneNumber.length > 14    //too long
        || /\D/.test(phoneNumber)     //contains non-digits
      ) {
        setMessageText(invalid);
      } else {
        setMessageText(valid);
      }

    A simple question I can not quite answer myself and would like to hear your opinions on: how do you position the surrounding (outermost) brackets? It's hard to see the difference between a normal and a curly bracket. Do you usually put the last ) on the same line as the last condition? Do you keep the first opening ( on a line by itself? Do you wrap each individual sub-condition in brackets too? Do you align the first ( horizontally with the last ), or do you place the last ) in the same column as the if? Do you keep ) { on a separate line, or do you place the last ) on the same line as the last sub-condition and then place the opening { on a new line? Or do you just put ) { on the same line as the last sub-condition? Community wiki. EDIT: Please only post opinions regarding the usage and placement of brackets. The code does not need to be refactored. This is for people who were introduced to JavaScript only a couple of weeks ago. I am not asking for opinions on how to write the code so it's shorter or performs better. I would only like to know how you place brackets around IF-conditions.

    Read the article

  • Getting svn: E170000: Unrecognized URL scheme for my custom Svn Gradle plugin

    - by Ip Doh
    I wrote a custom Gradle plugin using Groovy to do basic svn tasks like checkout, clean, tag etc. The Groovy class calls the svn command line client to do these operations. It works fine when I run it on my Windows system, but the same plugin gives the following error when I run it on a Linux system (CentOS):

      svn: E170000: Unrecognized URL scheme for '%22https://source.mycompany.net/svn/MyProject/trunk%22'

    I am able to make the same calls to the command line client through the command prompt or a shell script without any issues, so what is the difference? Here is my code sample:

      String command = String.format("svn co -r %d --non-interactive --trust-server-cert --username %s --password %s --depth infinity \"%s\" \"%s\"",
          getRevision(), getUserName(), getUserPassword(), getSrcUrl(), getDir());
      Process svnProcess = Runtime.getRuntime().exec(command);
      BufferedReader stdInput = new BufferedReader(new InputStreamReader(svnProcess.getInputStream()));
      BufferedReader stdError = new BufferedReader(new InputStreamReader(svnProcess.getErrorStream()));
      String statusOutputLine = ""
      while ((statusOutputLine = stdInput.readLine()) != null) {
          logger.quiet(" " + statusOutputLine);
      }
      while ((statusOutputLine = stdError.readLine()) != null) {
          logger.error(statusOutputLine)
          throw new Exception(statusOutputLine)
      }
      logger.quiet("Successfully checked out the work space")

    I do have Neon installed on the system:

      -bash-4.1$ svn --version
      svn, version 1.6.11 (r934486)
        compiled Jun 25 2011, 11:30:15
      Copyright (C) 2000-2009 CollabNet.
      Subversion is open source software, see http://subversion.tigris.org/
      This product includes software developed by CollabNet (http://www.Collab.Net/).

      The following repository access (RA) modules are available:

      * ra_neon : Module for accessing a repository via WebDAV protocol using Neon.
        - handles 'http' scheme
        - handles 'https' scheme
      * ra_svn : Module for accessing a repository using the svn network protocol.
        - with Cyrus SASL authentication
        - handles 'svn' scheme
      * ra_local : Module for accessing a repository on local disk.
        - handles 'file' scheme
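
    The %22 in the error is a URL-encoded double quote, which suggests the escaped \" characters are being handed to svn as literal parts of the URL rather than being stripped by a shell. For illustration, a minimal Python sketch of the same pitfall: a single command string is split on whitespace (not shell-parsed), while an argument list needs no quoting at all. The URL is the one from the question and the destination path is made up.

      import subprocess

      url = "https://source.mycompany.net/svn/MyProject/trunk"
      dest = "/tmp/MyProject"   # hypothetical checkout directory

      # Fragile: embedding quotes by hand. No shell is involved, so the quote
      # characters travel to svn as part of the URL itself.
      cmd_string = 'svn co --non-interactive "%s" "%s"' % (url, dest)
      subprocess.call(cmd_string.split())          # svn sees '"https://...' as the URL

      # Robust: one argument per list element, no quoting needed anywhere.
      subprocess.call(["svn", "co", "--non-interactive", url, dest])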

    Read the article

  • RelativeLayout differences between 1.5 and 2.1

    - by Kilnr
    I've got a ListView with items composed of RelativeLayouts. This is the relevant XML from the list items:

      <?xml version="1.0" encoding="utf-8"?>
      <RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
          android:layout_width="fill_parent"
          android:layout_height="wrap_content">
          <TextView android:id="@+id/xx"
              android:gravity="center_vertical"
              android:layout_width="wrap_content"
              android:layout_height="wrap_content"
              android:layout_gravity="center_vertical"
              android:layout_centerInParent="true"
              android:layout_alignParentLeft="true"/>
          <TextView android:id="@+id/title"
              android:layout_width="fill_parent"
              android:layout_height="wrap_content"
              android:layout_toRightOf="@id/xx" />
          <TextView android:id="@+id/tag"
              android:layout_width="wrap_content"
              android:layout_height="wrap_content"
              android:layout_toRightOf="@id/xx"
              android:layout_below="@id/title" />
          <TextView android:id="@+id/subtitle"
              android:layout_width="wrap_content"
              android:layout_height="wrap_content"
              android:layout_toRightOf="@id/tag"
              android:layout_below="@id/title" />
      </RelativeLayout>

    On Android 2.1 (tested on a Nexus One) this shows the desired behavior, but on Android 1.5 (tested on an HTC Hero) it does not. [edit] On 1.6 (emulator), it works as expected as well. The small grey line at the top left is what shows up in the first screenshot as "xx", so it should be vertically centered. As far as I can see, the XML dictates this, but for some reason 1.5 ignores it. Why is this? I can't find anything about this difference, and I've been brute forcing every combination of layout_center, center and alignParent*, but to no avail... Can anyone shed some light on this? Thanks!

    Read the article

  • Python Locking Implementation (with threading module)

    - by Matty
    This is probably a rudimentary question, but I'm new to threaded programming in Python and am not entirely sure what the correct practice is. Should I be creating a single lock object (either globally or being passed around) and using that everywhere that I need to do locking? Or, should I be creating multiple lock instances in each of the classes where I will be employing them. Take these 2 rudimentary code samples, which direction is best to go? The main difference being that a single lock instance is used in both class A and B in the second, while multiple instances are used in the first.

    Sample 1

      class A():
          def __init__(self, theList):
              self.theList = theList
              self.lock = threading.Lock()

          def poll(self):
              while True:
                  # do some stuff that eventually needs to work with theList
                  self.lock.acquire()
                  try:
                      self.theList.append(something)
                  finally:
                      self.lock.release()

      class B(threading.Thread):
          def __init__(self, theList):
              self.theList = theList
              self.lock = threading.Lock()
              self.start()

          def run(self):
              while True:
                  # do some stuff that eventually needs to work with theList
                  self.lock.acquire()
                  try:
                      self.theList.remove(something)
                  finally:
                      self.lock.release()

      if __name__ == "__main__":
          aList = []
          for x in range(10):
              B(aList)
          A(aList).poll()

    Sample 2

      class A():
          def __init__(self, theList, lock):
              self.theList = theList
              self.lock = lock

          def poll(self):
              while True:
                  # do some stuff that eventually needs to work with theList
                  self.lock.acquire()
                  try:
                      self.theList.append(something)
                  finally:
                      self.lock.release()

      class B(threading.Thread):
          def __init__(self, theList, lock):
              self.theList = theList
              self.lock = lock
              self.start()

          def run(self):
              while True:
                  # do some stuff that eventually needs to work with theList
                  self.lock.acquire()
                  try:
                      self.theList.remove(something)
                  finally:
                      self.lock.release()

      if __name__ == "__main__":
          lock = threading.Lock()
          aList = []
          for x in range(10):
              B(aList, lock)
          A(aList, lock).poll()
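
    For what it's worth, a minimal sketch of the usual pattern when several threads share one piece of data: a single Lock created alongside the data it guards, used as a context manager so release is guaranteed even on an exception. The names here are placeholders, not taken from the question.

      import threading

      the_list = []
      the_list_lock = threading.Lock()   # one lock, created next to the data it guards

      def append_item(item):
          # "with" acquires the lock and releases it even if an exception is raised.
          with the_list_lock:
              the_list.append(item)

      def remove_item(item):
          with the_list_lock:
              if item in the_list:
                  the_list.remove(item)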

    Read the article

  • HP QTP 10: Web-app testing - SomeObj.FireEvent("OnClick") works, SomeObj.Object.FireEvent("OnClick") doesn't

    - by Vitaliy
    Hi all! I have a rich web app built with ExtJS. It has a multi-select list control (created with JS+CSS). I want to click on some item in that list using HP QuickTest Pro 10 with Internet Explorer 6. I added that item into the Object Repository and found that the following code works (it selects the item):

      Browser("blah").Page("blah").WebElement("SomeElem").Click

    The next code also works:

      Browser("blah").Page("blah").WebElement("SomeElem").FireEvent("onMouseDown")
      Browser("blah").Page("blah").WebElement("SomeElem").FireEvent("onMouseUp")
      Browser("blah").Page("blah").WebElement("SomeElem").FireEvent("onClick")

    But I want to select several items using shift+click, and I don't know how to do that :( So I have a few questions: How can I perform a click on several web elements with the Shift key pressed? I tried to do that using CreateEventObject with shiftKey set to true, but the method (performing fireEvent on the DOM object, not the object from the Object Repository) doesn't work:

      Browser("blah").Page("blah").WebElement("SomeElem").Object.FireEvent("onClick")

    What is the difference between WebElement("Element").FireEvent("OnClick") and WebElement("Element").Object.FireEvent("OnClick")? Please, someone help, because I have fought with this problem a lot with no result. Thanks!

    Read the article

  • PostgreSQL: BYTEA vs OID+Large Object?

    - by mlaverd
    I started an application with Hibernate 3.2 and PostgreSQL 8.4. I have some byte[] fields that were mapped as @Basic (= PG bytea) and others that got mapped as @Lob (= PG Large Object). Why the inconsistency? Because I was a Hibernate noob. Now, those fields are max 4 kB (but the average is 2-3 kB). The PostgreSQL documentation mentions that LOs are good when the fields are big, but I didn't see what 'big' meant. I have upgraded to PostgreSQL 9.0 with Hibernate 3.6, and I had to change the annotation to @Type(type="org.hibernate.type.PrimitiveByteArrayBlobType"). This bug brought a potential compatibility issue to light, and I eventually found out that Large Objects are a pain to deal with compared to a normal field. So I am thinking of changing all of it to bytea. But I am concerned that bytea fields are encoded in hex, so there is some overhead in encoding and decoding, and this would hurt the performance. Are there good benchmarks of the performance of both of these? Has anybody made the switch and seen a difference?
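
    No benchmark to offer, but for anyone who wants to measure this themselves, a minimal Python sketch (assuming psycopg2 and a throwaway database) that times round-trips of ~3 kB bytea values. The large-object side would go through connection.lobject() instead and is left out here; the connection string and row count are assumptions.

      import time
      import psycopg2

      conn = psycopg2.connect("dbname=test")   # assumed throwaway database
      cur = conn.cursor()
      cur.execute("CREATE TEMPORARY TABLE blob_test (id serial PRIMARY KEY, data bytea)")

      payload = b"x" * 3072                    # roughly the 2-3 kB average from the question

      start = time.time()
      for _ in range(1000):
          cur.execute("INSERT INTO blob_test (data) VALUES (%s)", (psycopg2.Binary(payload),))
      conn.commit()
      print("1000 bytea inserts: %.2f s" % (time.time() - start))

      start = time.time()
      cur.execute("SELECT data FROM blob_test")
      rows = cur.fetchall()
      print("read %d rows: %.2f s" % (len(rows), time.time() - start))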

    Read the article

  • How to improve Java performance on Informix for Windows

    - by Michal Niklas
    I have a problem with the performance of Java UDR functions on Informix on Windows. On this server I already have some functions in C and SPL. I chose one function, wrote it in all three languages, and measured its performance on a test table. The function calculates a kind of checksum, so it does not use any db libraries etc., only string and math operations. I observed performance on 30k records with SQL like:

      select function(txt) from _tmp_perf_test

    changing function to function_c, function_spl or function_java. My performance tests showed that the C function is the fastest, the SPL function is about 5 times slower, and Java is 100 (one hundred!) times slower than C. I checked it a few times and the 1:100 ratio didn't improve. I changed the Java function to simply return the length of the string, but even this did not help, so it looks like there is a general problem with Java function invocation, because there was no difference in time between the Java function that calculates the checksum and the Java function that returns the length of the string. I increased JVM_MAX_HEAP_SIZE to 128 and that did not help either. I use IBM Informix Dynamic Server Version 11.50.TC6DE. The same test on a Linux server (IBM Informix Dynamic Server Version 11.50.FC6) shows more "normal" results, i.e. Java is slower than C and SPL, but only 2 to 5 times. What can I do to improve Java performance on an Informix server on Windows? More info about Java on the servers:

      c:\Informix\extend\krakatoa\jre\bin>java -version
      java version "1.5.0"
      Java(TM) 2 Runtime Environment, Standard Edition (build pwi32dev-20081129a (SR9-0 ))
      IBM J9 VM (build 2.3, J2RE 1.5.0 IBM J9 2.3 Windows Server 2003 x86-32 j9vmwi3223-20081129 (JIT enabled)
      J9VM - 20081126_26240_lHdSMr
      JIT  - 20081112_1511ifx1_r8
      GC   - 200811_07)
      JCL  - 20081129

      [root@informix11 bin]# ./java -version
      java version "1.5.0"
      Java(TM) 2 Runtime Environment, Standard Edition (build pxa64devifx-20071025 (SR6b))
      IBM J9 VM (build 2.3, J2RE 1.5.0 IBM J9 2.3 Linux amd64-64 j9vmxa6423-20071005 (JIT enabled)
      J9VM - 20071004_14218_LHdSMr
      JIT  - 20070820_1846ifx1_r8
      GC   - 200708_10)
      JCL  - 20071025

    Read the article

  • Microsoft OLE DB Provider for SQL Server error '80040e14' Could not find stored procedure

    - by BBlake
    I am migrating a classic ASP web app to new servers. The database back end is migrating from SQL Server 2000 to SQL Server 2008, and the app is moving from Win2000 x86 to Win2003R2 x64. I am getting the above error on every single stored procedure call within the application. I have verified:

      - Yes, the SQL user is set up, using correct username and password
      - Yes, the SQL user has execute permissions on the stored procedures in the database
      - Yes, I have updated the TypeLib references to the new UUID
      - Yes, I have logged into the database via SSMS with the SQL user id and it can see and execute the stored procedures just fine in SSMS, but not from the web app
      - Yes, the SQL user has the database set as its default database

    The most frustrating thing is it works fine on the DEV server, but not on the production server. I have gone through every IIS setting 5 or 6 times and the web app is set up precisely the same in both environments. The only difference is the database server name in the connection string (DEV vs prod). EDIT: I have also tried pointing the prod web box at the dev database server and get the same error, so I'm fairly sure the issue isn't on the database side.

    Read the article

  • iPhone OpenGL and NSTimer issues

    - by Kyle
    I have an NSTimer that runs at 60 Hz. With an OpenGL scene loaded and rendering, my game can get 60 fps, solid, all day long. Then if I go and recompile the app, or reload it, it will get 40 fps. Same resources loaded. I've been running into this problem for years, and I just want to know why. It's crazy, and I want to know if I should just abandon this stupid timer. Conditions are not different on my 3GS between loads. It will just get 40 fps sometimes. Obviously the clock rate is not different between loads, so the performance figures should be constant given a constant scene. Here is a log of my framerates. A good load: :-)

      FrameRate: 61
      FrameRate: 61
      FrameRate: 61
      FrameRate: 60
      FrameRate: 60
      FrameRate: 61
      FrameRate: 60
      FrameRate: 60
      FrameRate: 61
      FrameRate: 60
      FrameRate: 61

    Now, I'll go ahead and do nothing, recompile, and run:

      FrameRate: 43
      FrameRate: 50
      FrameRate: 45
      FrameRate: 48
      FrameRate: 40
      FrameRate: 45
      FrameRate: 42
      FrameRate: 41
      FrameRate: 42
      FrameRate: 44
      FrameRate: 41
      FrameRate: 46

    ^- Massive difference visually. What the flying heck could cause this? SAME area of the scene, SAME camera setup. No variables are different.

    Read the article

  • Windows console

    - by b-gen-jack-o-neill
    Hello. Well, I have a simple question, at least I hope it's simple. I have been interested in the Win32 console for a while. Our teacher told us that the Windows console is just for DOS and real mode emulation purposes. I know that is not true, because DOS applications are run by an emulator which only uses the console to display output. Another thing I learned is that the console has been built into Windows since NT. But what I could not find is how console programs are actually written to use the console. I use Visual C++ for programming (well, for learning). So the only thing I need to do to use the console is select a console project. I first thought that Windows decides whether it runs the app in console mode or tries to run the app in window mode. So I created a Win32 program and tried printf(). Well, I could not compile it. I know that by definition printf() prints text or variables to stdout. I also found that stdout is the console interface for output. But I could not find what stdout actually is. So basically what I want to ask is: what is the difference between a console app and a Win32 app? I thought that Windows starts the console when it gets a command from a "console-family" function, but obviously it does not, so there must be some code that actually commands Windows to create the console interface. And the second question is: when the console is created, how does Windows recognize which console terminal is used for which app? I mean, what actually is stdout? Is it an area in memory, or some Windows routine that is called? Thanks.
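
    Not an answer to the Win32-specific side, but a small Python sketch may help with the "what is stdout" part: stdout is simply the standard output stream the operating system hands to every process (file descriptor 1, or a write handle on Windows). It may be attached to a console window, a file, or a pipe; the program usually neither knows nor cares.

      import os
      import sys

      # sys.stdout is a file-like object wrapping the process's standard output handle.
      sys.stdout.write("via the stdout stream object\n")

      # Underneath it is plain file descriptor 1; writing to it directly goes to the
      # same place (console window, file, or pipe, depending on how we were started).
      os.write(1, b"via file descriptor 1\n")

      # Whether that destination is an interactive console/terminal can be queried:
      print("attached to a terminal/console:", sys.stdout.isatty())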

    Read the article

  • How to correctly relay TCP traffic between sockets?

    - by flukes1
    I'm trying to write some Python code that will establish an invisible relay between two TCP sockets. My current technique is to set up two threads, each one reading and subsequently writing 1kb of data at a time in a particular direction (i.e. 1 thread for A to B, 1 thread for B to A). This works for some applications and protocols, but it isn't foolproof - sometimes particular applications will behave differently when running through this Python-based relay. Some even crash. I think that this is because when I finish performing a read on socket A, the program running there considers its data to have already arrived at B, when in fact I - the devious man in the middle - have yet to send it to B. In a situation where B isn't ready to receive the data (whereby send() blocks for a while), we are now in a state where A believes it has successfully sent data to B, yet I am still holding the data, waiting for the send() call to execute. I think this is the cause of the difference in behaviour that I've found in some applications, while using my current relaying code. Have I missed something, or does that sound correct? If so, my real question is: is there a way around this problem? Is it possible to only read from socket A when we know that B is ready to receive data? Or is there another technique that I can use to establish a truly 'invisible' two-way relay between [already open & established] TCP sockets?
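
    For illustration, a minimal sketch of a select()-based relay that only reads from one side when it holds no unsent data for the other, and only writes when the destination is writable, so the relay never sits on data the destination is not ready to accept. The sockets a and b are assumed to be already-connected, non-blocking TCP sockets.

      import select

      BUF = 4096

      def relay(a, b):
          # Bytes read from each socket that have not yet been delivered to its peer.
          pending = {a: b"", b: b""}
          peer = {a: b, b: a}
          while True:
              # Only ask to read from a socket if its previous data has been delivered;
              # only ask to write to a socket if there is data waiting for it.
              want_read = [s for s in (a, b) if not pending[s]]
              want_write = [s for s in (a, b) if pending[peer[s]]]
              readable, writable, _ = select.select(want_read, want_write, [])
              for s in readable:
                  data = s.recv(BUF)
                  if not data:           # one side closed: stop relaying
                      return
                  pending[s] = data
              for s in writable:
                  sent = s.send(pending[peer[s]])
                  pending[peer[s]] = pending[peer[s]][sent:]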

    Read the article

  • C# Regex stops after first line matched

    - by JD Guzman
    Ok, so I have a regex and I need it to find matches in a multiline string. This is the string I am using:

      Device Identifier: disk0
      Device Node: /dev/disk0
      Part of Whole: disk0
      Device / Media Name: OCZ-VERTEX2 Media
      Volume Name: Not applicable (no file system)
      Mounted: Not applicable (no file system)
      File System: None
      Content (IOContent): GUID_partition_scheme
      OS Can Be Installed: No
      Media Type: Generic
      Protocol: SATA
      SMART Status: Verified
      Total Size: 240.1 GB (240057409536 Bytes) (exactly 468862128 512-Byte-Blocks)
      Volume Free Space: Not applicable (no file system)
      Device Block Size: 512 Bytes
      Read-Only Media: No
      Read-Only Volume: Not applicable (no file system)
      Ejectable: No
      Whole: Yes
      Internal: Yes
      Solid State: Yes
      OS 9 Drivers: No
      Low Level Format: Not supported

    Basically I need to separate each line into two groups, with the colon as the separator. The regex I am using is:

      @"([A-Za-z0-9\(\) \-\/]+):([A-Za-z0-9\(\) \-\/]+).*"

    It does work, but it only picks up the first line, separates it into the two groups like I want, and then stops at that point. I have tried the Multiline option but it doesn't make any difference. I must admit I am new to the regex world. Any help is appreciated.
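
    Not C#, but for illustration the same idea in Python: with the MULTILINE flag, ^ and $ match at every line boundary, so splitting each line at its first colon picks up every entry rather than only the first. The sample text is shortened here.

      import re

      text = """Device Identifier: disk0
      Device Node: /dev/disk0
      Total Size: 240.1 GB (240057409536 Bytes)"""

      # ^ and $ anchor to each line because of re.MULTILINE; [^:]+ takes everything
      # up to the first colon, and the rest of the line becomes the second group.
      pairs = re.findall(r"^\s*([^:]+):\s*(.+)$", text, re.MULTILINE)
      for key, value in pairs:
          print(key.strip(), "=>", value)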

    Read the article

  • applicationWillTerminate Appears to be Inconsistent

    - by Lauren Quantrell
    This one has me batty. In applicationWillTerminate I am doing two things: saving some settings to the app settings plist file and updating any changed data to the SQLite database referenced in the managedObjectContext. The problem is it works sometimes and not others. Same issue in the simulator and on the device. If I hit the home button while the app is running, I can only sometimes get the data to store in the plist and into the Core Data store. It seems that either both work or neither works, and it makes no difference if I switch the execution order (saveState, managedObjectContext or managedObjectContext, saveState). I can't figure out how this can happen. Any help is greatly appreciated. lq

    AppDelegate.m

      @synthesize rootViewController;

      - (void)applicationWillTerminate:(UIApplication *)application {
          [rootViewController saveState];
          NSError *error;
          if (managedObjectContext != nil) {
              if ([managedObjectContext hasChanges] && ![managedObjectContext save:&error]) {
                  // Handle error
                  exit(-1);  // Fail
              }
          }
      }

    RootViewController.m

      - (void)saveState {
          NSUserDefaults *userDefaults = [NSUserDefaults standardUserDefaults];
          [userDefaults setInteger:self.someInteger forKey:kSomeNumber];
          [userDefaults setObject:self.someArray forKey:kSomeArray];
      }

    Read the article

  • What is wrong with this mail header text?

    - by dnagirl
    The following $header is being sent via PHP's mail($to,$subject,$content,$header) command. The mail arrives and appears to have an attachment, but the mail text is empty, as is the file. I think this has something to do with line spacing but I can't see the problem. I have tried putting the contents (between the boundaries) in $contents rather than appending it to $header. It doesn't make a difference. Any thoughts?

      From: [email protected]
      Reply-To: [email protected]
      X-Mailer: PHP 5.3.1
      MIME-Version: 1.0
      Content-Type: multipart/mixed; boundary="7425195ade9d89bc7492cf520bf9f33a"

      --7425195ade9d89bc7492cf520bf9f33a
      Content-type:text/plain; charset=iso-8859-1
      Content-Transfer-Encoding: 7bit

      this is a test message.

      --7425195ade9d89bc7492cf520bf9f33a
      Content-Type: application/pdf; name="test.pdf"
      Content-Transfer-Encoding: base64
      Content-Disposition: attachment; filename="test.pdf"

      JVBERi0xLjMKJcfsj6IKNSAwIG9iago8PC9MZW5ndGggNiAwIFIvRmlsdGVyIC9GbGF0ZURlY29k
      ZT4+CnN0cmVhbQp4nE2PT0vEMBDFadVdO4p/v8AcUyFjMmma5CqIIF5cctt6WnFBqLD1+4Np1nY3
      c3lvfm+GyQ4VaUY11iQ2PTyuHG5/Ibdx9fIvhi3swJMZX24c602PTzENegwUbNAO4xcoCsG5xuWE
      ... the rest of the file ...
      MDAwMDA2MDYgMDAwMDAgbiAKMDAwMDAwMDcwNyAwMDAwMCBuIAowMDAwMDAxMDY4IDAwMDAwIG4g
      CjAwMDAwMDA2NDcgMDAwMDAgbiAKMDAwMDAwMDY3NyAwMDAwMCBuIAowMDAwMDAxMjg2IDAwMDAw
      IG4gCjAwMDAwMDA5MzIgMDAwMDAgbiAKdHJhaWxlcgo8PCAvU2l6ZSAxNCAvUm9vdCAxIDAgUiAv
      SW5mbyAyIDAgUgovSUQgWzxEMURDN0E2OUUzN0QzNjI1MDUyMEFFMjU0MTMxNTQwQz48RDFEQzdB
      NjlFMzdEMzYyNTA1MjBBRTI1NDEzMTU0MEM+XQo+PgpzdGFydHhyZWYKNDY5MwolJUVPRgo=
      --7425195ade9d89bc7492cf520bf9f33a--

    $header ends without a line break.

    Read the article

  • Django upload failing on request data read error

    - by Jake
    Hi All, I've got a Django app that accepts uploads from jQuery uploadify, a jQuery plugin that uses Flash to upload files and show a progress bar. Files under about 150k work, but bigger files always fail, almost always at around 192k (that's 3 chunks) completed, sometimes at around 160k. The exception I get is below.

      exceptions.IOError
      request data read error

      File "/usr/lib/python2.4/site-packages/django/core/handlers/wsgi.py", line 171, in _get_post
        self._load_post_and_files()
      File "/usr/lib/python2.4/site-packages/django/core/handlers/wsgi.py", line 137, in _load_post_and_files
        self._post, self._files = self.parse_file_upload(self.META, self.environ['wsgi.input'])
      File "/usr/lib/python2.4/site-packages/django/http/__init__.py", line 124, in parse_file_upload
        return parser.parse()
      File "/usr/lib/python2.4/site-packages/django/http/multipartparser.py", line 192, in parse
        for chunk in field_stream:
      File "/usr/lib/python2.4/site-packages/django/http/multipartparser.py", line 314, in next
        output = self._producer.next()
      File "/usr/lib/python2.4/site-packages/django/http/multipartparser.py", line 468, in next
        for bytes in stream:
      File "/usr/lib/python2.4/site-packages/django/http/multipartparser.py", line 314, in next
        output = self._producer.next()
      File "/usr/lib/python2.4/site-packages/django/http/multipartparser.py", line 375, in next
        data = self.flo.read(self.chunk_size)
      File "/usr/lib/python2.4/site-packages/django/http/multipartparser.py", line 405, in read
        return self._file.read(num_bytes)

    When running locally on the Django development server, big files work. I've tried setting

      FILE_UPLOAD_HANDLERS = ("django.core.files.uploadhandler.TemporaryFileUploadHandler",)

    in case it was the memory upload handler, but it made no difference. Does anyone know how to fix this?

    Read the article

  • MS Source Server - source stream is apparently not there when viewing with srctool

    - by Tim Peel
    Hi, I have been playing around with the MS Source Server tools in the MS Debugging Tools install. At present, I am running my code/pdbs through the Subversion indexing command, which is now running as expected. It creates the stream for a given pdb file and writes it to the pdb file. However, when I use that DLL and the associated pdb in Visual Studio 2008, it says the source code cannot be retrieved. If I check the pdb against srctool, it says none of the source files contained are indexed, which is very strange as the prior process ran fine. If I check the stream that was generated from the svnindex.cmd run for the pdb, srctool says all source files are indexed. Why would there be a difference? I have opened the pdb file in a text editor and I can see the original references to the source files on my machine (also under the srcsrv header name) and the new "injected" source server links to my Subversion repository. Should both references still exist in the pdb? I would have expected one to be removed. Either way, Visual Studio 2008 will not pick up my source references, so I am a bit lost as to what to try next. As far as I can tell, I have done everything I should have. Does anyone have similar experiences? Many thanks.

    Read the article

  • What is session management in Java?

    - by Sarang
    I have faced this question in my interview as well. I have a lot of confusion about session scope and its management in Java. In web.xml we have the entry:

      <session-config>
          <session-timeout>30</session-timeout>
      </session-config>

    What does it actually indicate? Is it the scope of the whole project? Another point confusing me is how we can separate the session scope of multiple requests in the same project. That is, if I am logging in from one PC and at the same time logging in from another PC, does it differentiate between them? Also, another confusing thing is the browser difference. Why is it possible to open different Gmail accounts in different browsers? And Gmail can keep a session from login to logout. How is that maintained, compared with our own web application?

    Read the article

  • How to find Part Time development/IT work?

    - by Jonathan
    I've been working in the IT field now for 10 years. Originally trained as an engineer, I started out with C++ and have been a .NET specialist since the beta. Currently seconded to a major city and working in the finance industry as a freelancer, I really feel like I've hit the glass ceiling. I have been contracting for 5 years now, after the company politics, the frustration of not being promoted, and poor pay rises for excellent work during the last decade of corporate cost cutting took their toll on my morale. Freelancing made all the difference, and I've had a very decorated career for good clients; what any engineering student could dream of when starting out. The problem is, it doesn't particularly make me happy. It's good work, and I enjoy the problem-solving aspects of it and having something to do each day. However, there is always a large overhead of non-technical work and dealing with poor managers etc. I guess the engineering was always a bit of a mistake I made the best out of, and now a decade behind a computer hasn't done wonders for my health or eyesight. In a nutshell, I am in the process of retraining as a therapist and would like to open my own clinic. However, never having done this before, with how quickly IT skills go out of date and the fact that all my experience and skills are non-transferable, I am a little worried. Any ideas how I can find part-time IT work as I build up my business? (It's incredibly hard to find freelance work that doesn't require long hours and overtime.) Or other ideas to make the transition easier, and perhaps back out if it doesn't work out financially or if I don't have enough marketing skills? I'd be interested to hear from people who have made a similar transition, successfully or unsuccessfully. Many thanks.

    Read the article

  • Source of UIView Implicit Animation delay?

    - by iPhoneToucher
    I have a block of UIView animation code that looks like this:

      [UIView beginAnimations:@"pushView" context:nil];
      [UIView setAnimationDelay:0];
      [UIView setAnimationDuration:.5];
      [UIView setAnimationDelegate:self];
      [UIView setAnimationWillStartSelector:@selector(animationWillStart)];
      view.frame = CGRectMake(0, 0, 320, 416);
      [UIView commitAnimations];

    The code basically mimics the animation of a ModalView presentation and is tied to a button on my interface. When the button is pressed, I get a long (.5 sec) delay (on iPod Touch... twice as fast on iPhone 3GS) before animationWillStart: actually gets called. My app has lots going on besides this, but I've timed various points of my code and the delay definitely occurs at this block. In other words, a timestamp immediately before this code block and a timestamp when animationWillStart: gets called show a .5 sec difference. I'm not too experienced with Core Animation and I'm just trying to figure out what the cause of the delay is... Memory use is stable when the animation starts and the Core Animation FPS seems to be fine in Instruments. The view that gets animated does have upwards of 20 total subviews, but if that were the issue wouldn't it cause choppiness after the animation starts, rather than before? Any ideas?

    Read the article

  • checkbox like radiobutton wpf c#

    - by rockenpeace
    I have investigated this problem, but the solutions I found use the design view and code-behind. My problem is a little different: I am trying to do this purely in code-behind, because my checkboxes are created dynamically from database data; in other words, the number of checkboxes is not fixed. I want only one checkbox in a group of checkboxes to be checked. When I click one checkbox, I want the IsChecked property of the other checkboxes to become false. This is the same behaviour as radio buttons. I add my checkboxes to a StackPanel on the XAML side:

      <StackPanel Margin="4" Orientation="Vertical" Grid.Row="1" Grid.Column="1"
                  Name="companiesContainer">
      </StackPanel>

    My xaml.cs:

      using (var c = new RSPDbContext())
      {
          var q = (from v in c.Companies select v).ToList();
          foreach (var na in q)
          {
              CheckBox ch = new CheckBox();
              ch.Content = na.Name;
              ch.Tag = na;
              companiesContainer.Children.Add(ch);
          }
      }
      foreach (object i in companiesContainer.Children)
      {
          CheckBox chk = (CheckBox)i;
          chk.SetBinding(ToggleButton.IsCheckedProperty, "DataItem.IsChecked");
      }

    How can I get this behaviour for the checkboxes in xaml.cs? Thanks in advance.

    Read the article

  • UITableViewCells of different heights place their accessoryViews at different X positions

    - by Jasarien
    Hey guys, my app has some table cells that vary in height. The cells can also have a UIButton set to be a detail disclosure button (round, blue with arrow) as their accessory view. Depending on the height of the cell, the accessory view is positioned differently. At first I thought it was my layout code for my cell that was causing the problem, so I set up a quick independent test that uses vanilla UITableCells to remove the possibility that it could be my fault. I set up a view in Interface Builder, just added a few table cells to the view, set their heights to different values and then added a detail disclosure button to each. Nothing more, nothing less. This is what I see (I added the size guides, thanks to Xscope, so you can see the difference in the accessory view x positions): the heights are top 37px, mid 68px, bottom 44px (default, untouched height). If I increase the height any higher than 68px the accessory view doesn't move any further to the left. Is this a bug? Is there any way I can prevent this from happening?

    Read the article

  • How do I copy or move an NSManagedObject from one context to another?

    - by Aeonaut
    I have what I assume is a fairly standard setup, with one scratchpad MOC which is never saved (containing a bunch of objects downloaded from the web) and another permanent MOC which persists objects. When the user selects an object from scratchMOC to add to her library, I want to either 1) remove the object from scratchMOC and insert into permanentMOC, or 2) copy the object into permanentMOC. The Core Data FAQ says I can copy an object like this:

      NSManagedObjectID *objectID = [managedObject objectID];
      NSManagedObject *copy = [context2 objectWithID:objectID];

    (In this case, context2 would be permanentMOC.) However, when I do this, the copied object is faulted; the data is initially unresolved. When it does get resolved, later, all of the values are nil; none of the data (attributes or relationships) from the original managedObject are actually copied or referenced. Therefore I can't see any difference between using this objectWithID: method and just inserting an entirely new object into permanentMOC using insertNewObjectForEntityForName:. I realize I can create a new object in permanentMOC and manually copy each key-value pair from the old object, but I'm not very happy with that solution. (I have a number of different managed objects for which I have this problem, so I don't want to have to write and update copy: methods for all of them as I continue developing.) Is there a better way?

    Read the article

  • How do I refactor this code by using Action&lt;T&gt; or Func&lt;T&gt; delegates

    - by user330612
    I have a sample program which needs to execute 3 methods in a particular order, and after executing each method it should do error handling. Now I did this in a normal fashion, without using delegates, like this:

      class Program
      {
          public static void Main()
          {
              MyTest();
          }

          private static bool MyTest()
          {
              bool result = true;
              int m = 2;
              int temp = 0;
              try
              {
                  temp = Function1(m);
              }
              catch (Exception e)
              {
                  Console.WriteLine("Caught exception for function1" + e.Message);
                  result = false;
              }
              try
              {
                  Function2(temp);
              }
              catch (Exception e)
              {
                  Console.WriteLine("Caught exception for function2" + e.Message);
                  result = false;
              }
              try
              {
                  Function3(temp);
              }
              catch (Exception e)
              {
                  Console.WriteLine("Caught exception for function3" + e.Message);
                  result = false;
              }
              return result;
          }

          public static int Function1(int x)
          {
              Console.WriteLine("Sum is calculated");
              return x + x;
          }

          public static int Function2(int x)
          {
              Console.WriteLine("Difference is calculated ");
              return (x - x);
          }

          public static int Function3(int x)
          {
              return x * x;
          }
      }

    As you can see, this code looks ugly with so many try/catch blocks which are all doing the same thing, so I decided I could use delegates to refactor it so that the try/catch can all be shoved into one method and it looks neat. I was looking at some examples online and couldn't figure out if I should use Action or Func delegates for this. Both look similar, but I am unable to get a clear idea of how to implement this. Any help is greatly appreciated. I'm using .NET 4.0, so I am also allowed to use anonymous methods and lambda expressions for this. Thanks.
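
    Not C# and not Action&lt;T&gt;/Func&lt;T&gt;, but for illustration a minimal Python sketch of the shape of the refactoring: each step is passed as a callable to one helper that owns the single try/except. The function names mirror the question; the helper name is made up.

      def run_step(step, *args):
          # Run one step; this single shared handler replaces the three try/catch copies.
          try:
              return True, step(*args)
          except Exception as e:
              print("Caught exception for %s: %s" % (step.__name__, e))
              return False, None

      def function1(x): return x + x
      def function2(x): return x - x
      def function3(x): return x * x

      def my_test():
          ok1, temp = run_step(function1, 2)
          if temp is None:
              temp = 0               # mirror the original, where temp starts at 0
          ok2, _ = run_step(function2, temp)
          ok3, _ = run_step(function3, temp)
          return ok1 and ok2 and ok3

      print(my_test())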

    Read the article

  • Why This Maintainability Index Increase?

    - by Timothy
    I would be appreciative if someone could explain to me the difference between the following two pieces of code in terms of Visual Studio's Code Metrics rules. Why does the Maintainability Index increase slightly if I don't encapsulate everything within using ( )?

    Sample 1 (MI score of 71)

      public static String Sha1(String plainText)
      {
          using (SHA1Managed sha1 = new SHA1Managed())
          {
              Byte[] text = Encoding.Unicode.GetBytes(plainText);
              Byte[] hashBytes = sha1.ComputeHash(text);
              return Convert.ToBase64String(hashBytes);
          }
      }

    Sample 2 (MI score of 73)

      public static String Sha1(String plainText)
      {
          Byte[] text, hashBytes;
          using (SHA1Managed sha1 = new SHA1Managed())
          {
              text = Encoding.Unicode.GetBytes(plainText);
              hashBytes = sha1.ComputeHash(text);
          }
          return Convert.ToBase64String(hashBytes);
      }

    I understand metrics are meaningless outside of a broader context and understanding, and programmers should exercise discretion. While I could boost the score up to 76 with return Convert.ToBase64String(sha1.ComputeHash(Encoding.Unicode.GetBytes(plainText))), I shouldn't. I would clearly be just playing with numbers and it isn't truly any more readable or maintainable at that point. I am curious though as to what the logic might be behind the increase in this case. It's obviously not line-count.

    Read the article
