Search Results

Search found 7325 results on 293 pages for 'windows7 32'.


  • Is it a good idea to use an integer column for storing US ZIP codes in a database?

    - by Yadyn
    From first glance, it would appear I have two basic choices for storing ZIP codes in a database table:

    - Text (probably most common), i.e. char(5), or varchar(9) to support the +4 extension
    - Numeric, i.e. 32-bit integer

    Both would satisfy the requirements of the data, if we assume that there are no international concerns. In the past we've generally just gone the text route, but I was wondering if anyone does the opposite? Just from a brief comparison, it looks like the integer method has two clear advantages:

    - It is, by its nature, automatically limited to numerics only (whereas without validation the text style could store letters and such, which are not, to my knowledge, ever valid in a ZIP code). This doesn't mean we could/would/should forgo validating user input as normal, though!
    - It takes less space, being 4 bytes (which should be plenty even for 9-digit ZIP codes) instead of 5 or 9 bytes.

    Also, it seems like it wouldn't hurt display output much. It is trivial to slap a ToString() on a numeric value, use simple string manipulation to insert a hyphen or space or whatever for the +4 extension, and use string formatting to restore leading zeroes. Is there anything that would discourage using int as a datatype for US-only ZIP codes?
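
    For illustration, a minimal sketch of that round trip in Python (the question is language-agnostic, so this is just one way to do it; the boundary between 5- and 9-digit values is an assumption):

        def format_zip(zip_int):
            """Render an integer-stored ZIP, restoring leading zeroes."""
            if zip_int > 99999:              # assume anything larger is ZIP+4
                s = "%09d" % zip_int
                return s[:5] + "-" + s[5:]
            return "%05d" % zip_int

        print(format_zip(501))       # 00501
        print(format_zip(21201234))  # 02120-1234

    One wrinkle the sketch exposes: a single integer cannot distinguish a plain 5-digit code from a 9-digit code with enough leading zeroes, which is itself a point in favour of the text route.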

    Read the article

  • identify accounts using zend-openid [HELP]

    - by tawfekov
    Hi! I have successfully implemented simple OpenID authentication with Gmail, Yahoo, myOpenID, AOL and many others, exactly as Stack Overflow does, but I have a problem identifying accounts. For example:

        'openid_ns' => string 'http://specs.openid.net/auth/2.0' (length=32)
        'openid_mode' => string 'id_res' (length=6)
        'openid_op_endpoint' => string 'https://www.google.com/accounts/o8/ud' (length=37)
        'openid_response_nonce' => string '2010-05-02T06:35:40Z9YIASispxMvKpw' (length=34)
        'openid_return_to' => string 'http://dev.local/download/index/gettingback' (length=43)
        'openid_assoc_handle' => string 'AOQobUf_2jrRKQ4YZoqBvq5-O9sfkkng9UFxoE9SEBlgWqJv7Sq_FYgD7rEXiLLXhnjCghvf' (length=72)
        'openid_signed' => string 'op_endpoint,claimed_id,identity,return_to,response_nonce,assoc_handle' (length=69)
        'openid_sig' => string 'knieEHpvOc/Rxu1bkUdjeE9YbbFX3lqZDOQFxLFdau0=' (length=44)
        'openid_identity' => string 'https://www.google.com/accounts/o8/id?id=AItOawk0NAa51T3xWy-oympGpxFj5CftFsmjZpY' (length=80)
        'openid_claimed_id' => string 'https://www.google.com/accounts/o8/id?id=AItOawk0NAa51T3xWy-oympGpxFj5CftFsmjZpY' (length=80)

    I thought that openid_claimed_id or openid_identity would be identical for every account, but after some testing it turns out the claimed ID is changeable by the users themselves on Yahoo, and changes randomly in the case of Gmail. FYI, I am using Zend OpenID with a bit of modification to the consumer class, and I can't use SREG [Simple Registration Extension] because it currently uses the OpenID v1 specification :( I haven't read the OpenID specification, so if anybody knows, please help me understand how to identify OpenID accounts, i.e. how to know whether an account has signed in before or not.

    Read the article

  • How to write this snippet in Python?

    - by morpheous
    I am learning Python (I have a C/C++ background). I need to write something practical in Python, though, whilst learning. I have the following pseudocode (my first attempt at writing a Python script, since reading about Python yesterday). Hopefully, the snippet details the logic of what I want to do. BTW I am using Python 2.6 on Ubuntu Karmic. Assume the script is invoked as: script_name.py directory_path

        import csv, sys, os, glob

        # Can I declare that the function accepts a dictionary as first arg?
        def getItemValue(item, key, defval)
            return !item.haskey(key) ? defval : item[key]

        dirname = sys.argv[1]
        # declare some default values here
        weight, is_male, default_city_id = 100, true, 1
        # fetch some data from a database table into a nested dictionary, indexed by a string
        curr_dict = load_dict_from_db('foo')
        # iterate through all the files matching *.csv in the specified folder
        for infile in glob.glob( os.path.join(dirname, '*.csv') ):
            # get the file name (without the '.csv' extension)
            code = infile[0:-4]
            # open file, and iterate through the rows of the current file (a CSV file)
            f = open(infile, 'rt')
            try:
                reader = csv.reader(f)
                for row in reader:
                    # lookup the id for the code in the dictionary
                    id = curr_dict[code]['id']
                    name = row['name']
                    address1 = row['address1']
                    address2 = row['address2']
                    city_id = getItemValue(row, 'city_id', default_city_id)
                    # insert row to database table
            finally:
                f.close()

    I have the following questions:

    - Is the code written in a Pythonic enough way (is there a better way of implementing it)?
    - Given a table with a schema like the one shown below, how may I write a Python function that fetches data from the table and returns it in a dictionary indexed by string (name)?
    - How can I insert the row data into the table (actually I would like to use a transaction if possible, and commit just before the file is closed)?

    Table schema:

        create table demo (id int, name varchar(32), weight float, city_id int);

    BTW, my backend database is PostgreSQL.
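
    For comparison, a hedged sketch of the idioms the first question is reaching for (dict.get replaces the C-style ternary, csv.DictReader supplies the row['name'] access the pseudocode assumes, and the file name 'demo.csv' is hypothetical):

        import csv

        # A dict already provides "default if key missing":
        def get_item_value(item, key, defval):
            return item.get(key, defval)

        # DictReader yields each row as a dict keyed by the CSV header line,
        # which is what the row['name'] lookups above assume exists.
        with open('demo.csv', 'rt') as f:   # 'with' replaces the try/finally
            for row in csv.DictReader(f):
                city_id = get_item_value(row, 'city_id', 1)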

    Read the article

  • mmap only needed pages of kernel buffer to user space

    - by axeoth
    See also this answer: http://stackoverflow.com/a/10770582/1284631

    I need something similar, but without having to allocate a buffer: the buffer is large, in theory, but the user space program only needs to access some parts of it (it mocks some registers of a hardware device). As I cannot allocate such a large buffer with vmalloc_user() (32-bit kernel, in an embedded environment, no swap...), I followed the same approach as in the quoted answer, trying to allocate only those pages that are really requested by the user space. So: I use a my_mmap() function for the device file (actually, it is the .mmap field of a struct uio_info) to set up the fields of the vma; then, in the vm_area_struct's .fault handler (call it my_fault()), I should return a page. Except that in the my_fault() method I cannot obtain a page through:

        vmf->page = vmalloc_to_page(my_buf + (vmf->pgoff << PAGE_SHIFT));

    since there is no allocated buffer:

        my_buf = vmalloc_user(MY_BUF_SIZE);

    fails with "allocation failed: out of vmalloc space - use vmalloc= to increase size." (and there is no room or swap to increase that vmalloc= parameter). So, I would need to get a page from the kernel and fill the vmf->page field. How do I allocate a page (I assume that the offset of the page is known, as it is vmf->pgoff)? What base memory should I use instead of my_buf?

    PS: I also did set up vma->flags |= VM_NORESERVE; (in my_mmap()), but I am not sure if it helps. Is there any vmalloc_user_unreserved()-like function (let's say, lazy allocation)? Also, writing 1 to /proc/sys/vm/overcommit_memory and large values (e.g. 500) to /proc/sys/vm/overcommit_ratio before trying my_buf = vmalloc_user(<large_size>) didn't work.

    Read the article

  • Using an SHA1 with Microsoft CAPI

    - by Erik Jõgi
    Hello, I have an SHA1 hash and I need to sign it. The CryptSignHash() method requires a HCRYPTHASH handle for signing. I create it, and as I have the actual hash value already, I set it:

        CryptCreateHash(cryptoProvider, CALG_SHA1, 0, 0, &hash);
        CryptSetHashParam(hash, HP_HASHVAL, hashBytes, 0);

    The hashBytes is an array of 20 bytes. However, the problem is that the signature produced from this HCRYPTHASH handle is incorrect. I traced the problem down to the fact that CAPI actually doesn't use all 20 bytes from my hashBytes array. For some reason it thinks that SHA1 is only 4 bytes. To verify this I wrote this small program:

        HCRYPTPROV cryptoProvider;
        CryptAcquireContext(&cryptoProvider, NULL, NULL, PROV_RSA_FULL, 0);

        HCRYPTHASH hash;
        HCRYPTKEY keyForHash;
        CryptCreateHash(cryptoProvider, CALG_SHA1, keyForHash, 0, &hash);

        DWORD hashLength;
        CryptGetHashParam(hash, HP_HASHSIZE, NULL, &hashLength, 0);
        printf("hashLength: %d\n", hashLength);

    And this prints out hashLength: 4! Can anyone explain what I am doing wrong, or why Microsoft CAPI thinks that SHA1 is 4 bytes (32 bits) instead of 20 bytes (160 bits)? Thank you.

    Read the article

  • PHP5 getrusage() returning incorrect information?

    - by Andrew
    I'm trying to determine CPU usage of my PHP scripts. I just found this article which details how to find system and user CPU usage time (Section 4). However, when I tried out the examples, I received completely different results.

    The first example:

        sleep(3);
        $data = getrusage();
        echo "User time: ". ($data['ru_utime.tv_sec'] + $data['ru_utime.tv_usec'] / 1000000);
        echo "System time: ". ($data['ru_stime.tv_sec'] + $data['ru_stime.tv_usec'] / 1000000);

    Results in:

        User time: 29.53
        System time: 2.71

    Example 2:

        for($i=0;$i<10000000;$i++) { }
        // Same echo statements

    Results:

        User time: 16.69
        System time: 2.1

    Example 3:

        $start = microtime(true);
        while(microtime(true) - $start < 3) { }
        // Same echo statements

    Results:

        User time: 34.94
        System time: 3.14

    Obviously, none of the information is correct except maybe the system time in the third example. So what am I doing wrong? I'd really like to be able to use this information, but it needs to be reliable. I'm using Ubuntu Server 8.04 LTS (32-bit) and this is the output of php -v:

        PHP 5.2.4-2ubuntu5.10 with Suhosin-Patch 0.9.6.2 (cli) (built: Jan 6 2010 22:01:14)
        Copyright (c) 1997-2007 The PHP Group
        Zend Engine v2.2.0, Copyright (c) 1998-2007 Zend Technologies
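
    As a sanity check on what getrusage should report, here is the same experiment sketched in Python (resource.getrusage wraps the same syscall; assumes a Unix system): sleeping should cost almost no CPU, while a 3-second busy-wait should cost roughly 3 seconds of user time.

        import resource, time

        def cpu_times():
            ru = resource.getrusage(resource.RUSAGE_SELF)
            return ru.ru_utime, ru.ru_stime   # (user, system) seconds

        time.sleep(3)                         # sleeping consumes ~0 CPU
        print("after sleep:", cpu_times())

        start = time.time()
        while time.time() - start < 3:        # busy-wait burns ~3s of user time
            pass
        print("after busy-wait:", cpu_times())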

    Read the article

  • Flex profiling - what is [enterFrameEvent] doing?

    - by Herms
    I've been tasked with finding (and potentially fixing) some serious performance problems with a Flex application that was delivered to us. The application will consistently take up 50 to 100% of the CPU at times when it is simply idling and shouldn't be doing anything. My first step was to run the profiler that comes with Flex Builder. I expected to find some method that was taking up most of the time, showing me where the bottleneck was. However, I got something unexpected. The top 4 methods were:

    - [enterFrameEvent]: 84% cumulative, 32% self time
    - [reap]: 20% cumulative and self time
    - [tincan]: 8% cumulative and self time
    - global.isNaN: 4% cumulative and self time

    All other methods had less than 1% for both cumulative and self time. From what I've found online, the [bracketed methods] are what the profiler lists when it doesn't have an actual Flex method to show. I saw someone claim that [tincan] is the processing of RTMP requests, and I assume [reap] is the garbage collector. Does anyone know what [enterFrameEvent] is actually doing? I assume it's essentially the "main" function for the event loop, so the high cumulative time is expected. But why is the self time so high? What's actually going on? I didn't expect the player internals to be taking up so much time, especially since nothing is actually happening in the app (and there are no UI updates going on). Is there any good way to dig into what's happening? I know something is going on that shouldn't be (it looks like there must be some kind of busy wait or other runaway loop), but the profiler isn't giving me any results that I was expecting. My next step is going to be to start adding debug trace statements in various places to try and track down what's actually happening, but I feel like there has to be a better way.

    Read the article

  • Error copying file from app bundle

    - by Michael Chen
    I used the Firefox add-on SQLite Manager and created a database, which saved to my desktop as "DB.sqlite". I copied the file into my supporting files for the project. But when I run the app, I immediately get the error:

        Assertion failure in -[AppDelegate copyDatabaseIfNeeded], /Users/Mac/Desktop/Note/Note/AppDelegate.m:32
        2014-08-19 23:38:02.830 Note[28309:60b] Terminating app due to uncaught exception 'NSInternalInconsistencyException',
        reason: 'Failed to create writable database file with message 'The operation couldn’t be completed. (Cocoa error 4.)'.'
        First throw call stack: "...

    Here is the app delegate code where the error takes place:

        -(void) copyDatabaseIfNeeded {
            NSFileManager *fileManager = [NSFileManager defaultManager];
            NSError *error;
            NSString *dbPath = [self getDBPath];
            BOOL success = [fileManager fileExistsAtPath:dbPath];
            if (!success) {
                NSString *defaultDBPath = [[[NSBundle mainBundle] resourcePath] stringByAppendingPathComponent:@"DB.sqlite"];
                success = [fileManager copyItemAtPath:defaultDBPath toPath:dbPath error:&error];
                if (!success)
                    NSAssert1(0, @"Failed to create writable database file with message '%@'.", [error localizedDescription]);
            }
        }

    I am very new to SQLite, so maybe I didn't create the database correctly in the Firefox SQLite Manager, or maybe I didn't "properly" copy the .sqlite file in? (I did check the target membership of the sqlite file and it correctly has my project selected. Also, the .sqlite file names all match up perfectly.)

    Read the article

  • Replace some column values depending on a condition, MATLAB

    - by darkcminor
    I have a matrix like:

        A = 4.0000   120.0000  92.0000  0        0         37.6000  0.1910  30.0000
            10.0000  168.0000  74.0000  0        0         38.0000  0.5370  34.0000
            10.0000  139.0000  80.0000  0        0         27.1000  1.4410  57.0000
            1.0000   139.0000  60.0000  23.0000  846.0000  30.1000  0.3980  59.0000
            5.0000   136.0000  72.0000  19.0000  175.0000  25.8000  0.5870  51.0000
            7.0000   121.0000  0        0        0         30.0000  0.4840  32.0000

    I want to replace the values of the first column that are greater than 5 with 0. Also, in the second column, if the values are in the range 121-130, replace them with 0; if they are in the range 131-140, replace them with 1; 141-150 with 2; 151-160 with 3... so the result matrix would be:

        A = 4.0000   0.0000    92.0000  0        0         37.6000  0.1910  30.0000
            0.0000   4.0000    74.0000  0        0         38.0000  0.5370  34.0000
            0.0000   1.0000    80.0000  0        0         27.1000  1.4410  57.0000
            1.0000   1.0000    60.0000  23.0000  846.0000  30.1000  0.3980  59.0000
            5.0000   1.0000    72.0000  19.0000  175.0000  25.8000  0.5870  51.0000
            0.0000   0.0000    0        0        0         30.0000  0.4840  32.0000

    How to accomplish this? I was trying something like:

        counter = 1;
        for i = 1:rows
            if A(i,1) > 5
                A(i,1) = 0;
            end
            if A(i,2) > 120 && A(i,2) < 130
                A(i,2) = 0;
            end
            counter = counter + 1;
        end

    Could using a case do the trick?
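
    Not MATLAB, but the same vectorized idea sketched in Python/numpy for illustration (logical indexing in place of the loop; the decade bucketing is floor((x - 121) / 10), and note the question's expected output also maps the boundary value 120 to 0, so the cutoffs may need adjusting):

        import numpy as np

        A = np.array([[4.0, 120.0], [10.0, 168.0], [10.0, 139.0],
                      [1.0, 139.0], [5.0, 136.0], [7.0, 121.0]])

        A[A[:, 0] > 5, 0] = 0                  # first column: >5 becomes 0
        mask = A[:, 1] >= 121                  # second column: bucket by decade
        A[mask, 1] = (A[mask, 1] - 121) // 10  # 121-130 -> 0, 131-140 -> 1, ...
        print(A)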

    Read the article

  • Change default java installation

    - by user1501700
    I have many Java versions installed on a Windows 7 machine. Some of them are 32-bit, some are 64-bit. By default it currently starts one of the latest versions (1.7, 64-bit). How do I tell my Windows 7 machine to use another version of Java? One of the reasons is that I'm developing a JNI project from Microsoft Visual Studio C++, and it also uses Java 1.7 64-bit. Best regards, Andrej

    I have set the user variables:

        JAVA_HOME=C:\j2sdk1.4.2_04
        PATH=%JAVA_HOME%\bin;%PATH%

    and the system variables:

        JAVA_HOME=C:\j2sdk1.4.2_04
        PATH=...a_lot_of_paths...;%JAVA_HOME%\bin;%PATH%

    I had no idea whether it was better to set the user or the system settings, so I did both, then restarted the system. And... it didn't help :( When I run "java -version" from cmd I get Java 1.7, not Java 1.4 as defined in PATH. After running:

        C:\> where java

    I got two results:

        C:\Windows\System32\java.exe
        C:\j2sdk1.4.2_04\bin\java.exe

    Who let Java go into my Windows directory?! How do I deal with that?

    Read the article

  • Windows 7 versus Windows XP multithreading - Delphi app not acting right

    - by Robert Oschler
    I'm having a problem with a Delphi Pro 6 application that I wrote on my Windows XP machine when it runs on Windows 7. I don't have Windows 7 to test yet and I'm trying to see if Windows 7 might be the source of the trouble. Is there a fundamental difference between the way Windows 7 handles threads compared to Windows XP? I am seeing things happen out of sequence in my error logs on Windows 7 and it's causing problems. For example, objects that should have been initialized are uninitialized when running on Windows 7, yet those objects are initialized on Windows XP by the time they are needed. Some questions:

    1) Are there any core differences that could cause threads/processes to behave differently between the two operating system versions?
    2) I know this next question may seem absurd, but does Windows 7 attempt to split/fork threads that aren't split/forked on Windows XP?
    3) And lastly, are there any known issues with FPU handling that can cause XP programs trouble when run on Windows 7, due to operational differences in wait state handling or register storage, or perhaps something like exception mask settings, etc.?
    4) Any 32-bit versus 64-bit issues that could be creating trouble here?

    -- roschler

    Read the article

  • Project builds skipped with Any CPU build platform

    - by JMarsch
    All: We are using Visual Studio 2010, and we have recently upgraded our workstations to Windows 7/64-bit. I have a question: when I create a new solution, it seems to want to use the x86 platform. If I change the solution to "Any CPU" and then add a new project to the solution, the project will not have an "Any CPU" build option, and it will be deselected from building (in Configuration Manager). Something seems wrong here. Here's what I want to have (assuming that it is supported): I want my solutions' platforms to default to "Any CPU" (I believe that means that at JIT time, the assembly will be either x86 or 64-bit, based on the machine that loaded it). When I add a new project to the solution, I want it to have an "Any CPU" platform, and I want that project to build by default (basically, the same behavior that we had in VS 2008 on 32-bit workstations). How do I do that? Is there some additional thing that I need to know now that I am using a 64-bit workstation?

    Read the article

  • WCF client binding configuration in program code

    - by smarsha
    I have the following class that configures security, encoding, and token parameters, but I am having trouble adding a BasicHttpBinding to specify a MaxReceivedMessageSize. Any insight would be appreciated.

        public class MultiAuthenticationFactorBinding
        {
            public static Binding CreateMultiFactorAuthenticationBinding()
            {
                HttpsTransportBindingElement httpTransport = new HttpsTransportBindingElement();

                CustomBinding binding = new CustomBinding();
                binding.Name = "myCustomBinding";

                TransportSecurityBindingElement messageSecurity =
                    TransportSecurityBindingElement.CreateUserNameOverTransportBindingElement();
                messageSecurity.AllowInsecureTransport = true;
                messageSecurity.EnableUnsecuredResponse = true;
                messageSecurity.MessageSecurityVersion =
                    MessageSecurityVersion.WSSecurity11WSTrust13WSSecureConversation13WSSecurityPolicy12;
                messageSecurity.SecurityHeaderLayout = SecurityHeaderLayout.Strict;
                messageSecurity.IncludeTimestamp = true;
                messageSecurity.SetKeyDerivation(false);

                TextMessageEncodingBindingElement Quota =
                    new TextMessageEncodingBindingElement(MessageVersion.Soap11, System.Text.Encoding.UTF8);
                Quota.ReaderQuotas.MaxDepth = 32;
                Quota.ReaderQuotas.MaxStringContentLength = Int32.MaxValue;
                Quota.ReaderQuotas.MaxArrayLength = 16384;
                Quota.ReaderQuotas.MaxBytesPerRead = 4096;
                Quota.ReaderQuotas.MaxNameTableCharCount = 16384;

                X509SecurityTokenParameters clientX509SupportingTokenParameters =
                    new X509SecurityTokenParameters();
                clientX509SupportingTokenParameters.InclusionMode =
                    SecurityTokenInclusionMode.AlwaysToRecipient;
                clientX509SupportingTokenParameters.RequireDerivedKeys = false;
                messageSecurity.EndpointSupportingTokenParameters.Endorsing.Add(clientX509SupportingTokenParameters);

                //binding.ReceiveTimeout = new TimeSpan(0,0,300);
                binding.Elements.Add(Quota);
                binding.Elements.Add(messageSecurity);
                binding.Elements.Add(httpTransport);
                return binding;
            }
        }

    Read the article

  • Cannot install windows service

    - by Matthew Dalton
    I have created a very simple Windows service using Visual Studio 2010 and .NET 4.0. This service has no functionality added beyond the default Windows service project, other than an installer that has been added. If I run installutil.exe appName.exe on my dev box or on other Windows 2008 R2 machines in our domain, the Windows service installs without issue. When I try to do the same thing at our customer site, it fails to install with the following error:

        Microsoft (R) .NET Framework Installation utility Version 4.0.30319.1
        Copyright (c) Microsoft Corporation. All rights reserved.

        Exception occurred while initializing the installation:
        System.IO.FileLoadException: Could not load file or assembly 'file:///C:\TestService\WindowsService1.exe'
        or one of its dependencies. Operation is not supported. (Exception from HRESULT: 0x80131515).

    This solution has only one project and no dependencies added. I have tried it on multiple machines in our environment and on two at our customer's. The machines are all Windows 2008 R2, both fresh installs. One machine has just .NET 2.0 and .NET 4.0; the other has .NET 2, 3, 3.5 and 4. I am a local admin on each of the machines. I have also tried the 64-bit installer but get a System.BadImageFormatException, so I think the 32-bit one is the one to use. Any guidance would be appreciated. Thanks.

    Read the article

  • How to maintain long-lived python projects w.r.t. dependencies and python versions?

    - by Gyom
    Short version: how can I get rid of the multiple-versions-of-Python nightmare?

    Long version: over the years, I've used several versions of Python, and what is worse, several extensions to Python (e.g. pygame, pylab, wxPython...). Each time it was on a different setup, with different OSes, sometimes different architectures (like my old PowerPC Mac). Nowadays I'm using a Mac (OS X 10.6 on x86-64) and it's a dependency nightmare each time I want to revive a script older than a few months. Python itself already comes in three different flavours in /usr/bin (2.5, 2.6, 3.1), but I had to install 2.4 from MacPorts for pygame, and something else (I cannot remember what) forced me to install all three others from MacPorts as well, so at the end of the day I'm the happy owner of seven (!) instances of Python on my system. But that's not the problem. The problem is, none of them has the right (i.e. same set of) libraries installed, some of them are 32-bit, some 64-bit, and now I'm pretty much lost. For example, right now I'm trying to run a three-year-old script (not written by me) which used matplotlib/numpy to draw a real-time plot within a rectangle of a wxWidgets window. But I'm failing miserably: py26-wxpython from MacPorts won't install, stock Python has wxWidgets included but also has some conflict between 32 bits and 64 bits, and it doesn't have numpy... what a mess! Obviously, I'm doing things the wrong way. How do you usually cope with all that chaos?
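
    Not from the question, but the usual answer today is one isolated environment per project; a hedged sketch (virtualenv in the Python 2 era, the stdlib venv module from Python 3.3 on; the environment name is made up):

        # Python 3.3+ only; for the 2.x interpreters above, the third-party
        # virtualenv tool plays the same role.
        import venv

        venv.create('env-oldscript', with_pip=True)
        # then: env-oldscript/bin/pip install numpy matplotlib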

    Read the article

  • Google Maps - custom icons with infoWindows

    - by hfidgen
    Hiya, as far as I can tell, this code is fine, and should display some custom icons with popup HTML windows. But the popups aren't working! Can anyone point out what I'm doing wrong? I can't seem to debug it myself. Thanks!

        function initialize() {
          if (GBrowserIsCompatible()) {
            var map = new GMap2(document.getElementById("map"));
            map.setCenter(new GLatLng(51.410416, -0.293884), 15);
            map.addControl(new GSmallMapControl());
            map.addControl(new GMapTypeControl());

            var i_parking = new GIcon();
            i_parking.image = "http://google-maps-icons.googlecode.com/files/parking.png";
            i_parking.iconSize = new GSize(32, 37);
            i_parking.iconAnchor = new GPoint(16, 37);
            icon_parking = { icon: i_parking };

            var marker_office = new GMarker(new GLatLng(51.410416, -0.293884));
            var marker_parking1 = new GMarker((new GLatLng(51.410178, -0.292000)), icon_parking);
            var marker_parking2 = new GMarker((new GLatLng(51.410152, -0.298948)), icon_parking);

            marker_parking1.openInfoWindowHtml('<strong>On Street Parking</strong><br>Church Road - 40p per hour');
            marker_parking2.openInfoWindowHtml('<strong>Multi Storey - Fairfield</strong><br>Upper Car Park - 90p per half hour<br>Lower Car Park - £1.20 per hour');

            map.addOverlay(marker_office);
            map.addOverlay(marker_parking1);
            map.addOverlay(marker_parking2);
          }
        }

    Read the article

  • Reasonably faster way to traverse a directory tree in Python?

    - by Sridhar Ratnakumar
    Assuming that the given directory tree is of reasonable size: say an open source project like Twisted or Python, what is the fastest way to traverse and iterate over the absolute path of all files/directories inside that directory? I want to do this from within Python (subprocess is allowed). os.path.walk is slow. So I tried ls -lR and tree -fi. For a project with about 8337 files (including tmp, pyc, test, .svn files):

        $ time tree -fi > /dev/null
        real    0m0.170s
        user    0m0.044s
        sys     0m0.123s

        $ time ls -lR > /dev/null
        real    0m0.292s
        user    0m0.138s
        sys     0m0.152s

        $ time find . > /dev/null
        real    0m0.074s
        user    0m0.017s
        sys     0m0.056s

    tree appears to be faster than ls -lR (though ls -R is faster than tree, but it does not give full paths). find is the fastest. Can anyone think of a faster and/or better approach? On Windows, I may simply ship a 32-bit binary tree.exe or ls.exe if necessary.

    Update 1: Added find
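
    Not an answer from the thread, just a reference sketch in pure Python using os.walk (the successor to the deprecated os.path.walk); whether it competes with find depends mostly on the filesystem cache:

        import os

        def walk_paths(root):
            """Yield the absolute path of every file and directory under root."""
            root = os.path.abspath(root)
            for dirpath, dirnames, filenames in os.walk(root):
                for name in dirnames + filenames:
                    yield os.path.join(dirpath, name)

        for path in walk_paths('.'):
            print(path)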

    Read the article

  • Sonata Media Bundle sortBy created_at

    - by tony908
    I use SonataMediaBundle and I would like to sort Gallery by the created_at field. In the repository class I have (it works fine without the orderBy!):

        $qb = $this->createQueryBuilder('m')
            ->orderBy('j.expires_at', 'DESC');
        $query = $qb->getQuery();
        return $query->getResult();

    and this throws an error:

        An exception has been thrown during the rendering of a template ("[Semantical Error] line 0, col 80
        near 'created_at D': Error: Class Application\Sonata\MediaBundle\Entity\Gallery has no field or
        association named created_at")

    so I added this field to the Gallery class:

        /**
         * @var \DateTime
         */
        private $created_at;

        /**
         * Set created_at
         *
         * @param \DateTime $createdAt
         * @return Slider
         */
        public function setCreatedAt($createdAt)
        {
            $this->created_at = $createdAt;
            return $this;
        }

        /**
         * Get created_at
         *
         * @return \DateTime
         */
        public function getCreatedAt()
        {
            return $this->created_at;
        }

    but now I have this error:

        FatalErrorException: Compile Error: Declaration of Application\Sonata\MediaBundle\Entity\Gallery::setCreatedAt()
        must be compatible with Sonata\MediaBundle\Model\GalleryInterface::setCreatedAt(DateTime $createdAt = NULL)
        in /home/tony/www/test/Application/Sonata/MediaBundle/Entity/Gallery.php line 32

    GalleryInterface: https://github.com/sonata-project/SonataMediaBundle/blob/master/Model/GalleryInterface.php

    So... how can I use sortBy in my example?

    Read the article

  • initscript mysql, action "start" failed. dpkg: error processing mysql-server-5.0 (--configure)

    - by mazgalici
        2 not fully installed or removed.
        Need to get 0B of archives.
        After unpacking 0B of additional disk space will be used.
        Do you want to continue [Y/n]? y
        Setting up mysql-server-5.0 (5.0.32-7etch12) ...
        Stopping MySQL database server: mysqld.
        Starting MySQL database server: mysqld . . . . . . . . . . . . . . failed!
        invoke-rc.d: initscript mysql, action "start" failed.
        dpkg: error processing mysql-server-5.0 (--configure):
         subprocess post-installation script returned error exit status 1
        dpkg: dependency problems prevent configuration of mysql-server:
         mysql-server depends on mysql-server-5.0; however:
          Package mysql-server-5.0 is not configured yet.
        dpkg: error processing mysql-server (--configure):
         dependency problems - leaving unconfigured
        Errors were encountered while processing:
         mysql-server-5.0
         mysql-server
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    Read the article

  • Modify this code to read bytes in the reverse endian?

    - by ibiza
    Hi, I have this bit of code which reads an 8-byte array and converts it to an int64. I would like to know how to tweak this code so it would work when receiving data represented in the reverse endianness...

        protected static long getLong(byte[] b, int off) {
            return ((b[off + 7] & 0xFFL) >> 0) +
                   ((b[off + 6] & 0xFFL) << 8) +
                   ((b[off + 5] & 0xFFL) << 16) +
                   ((b[off + 4] & 0xFFL) << 24) +
                   ((b[off + 3] & 0xFFL) << 32) +
                   ((b[off + 2] & 0xFFL) << 40) +
                   ((b[off + 1] & 0xFFL) << 48) +
                   (((long) b[off + 0]) << 56);
        }

    Thanks for the help!
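
    The byte-order logic itself is just a mirror of the offsets. As a cross-check, a sketch of both orders in Python's struct module rather than Java ('>q' reads big-endian, '<q' little-endian; the sample bytes are made up):

        import struct

        data = bytes([0, 0, 0, 0, 0, 0, 1, 2])

        big,    = struct.unpack_from('>q', data, 0)   # byte 0 is most significant
        little, = struct.unpack_from('<q', data, 0)   # byte 0 is least significant
        print(hex(big))     # 0x102
        print(hex(little))  # 0x201000000000000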

    Read the article

  • iOS Simulator is black on app execution

    - by Terryn
    I am running Xcode 5.1.1 and have been trying to learn Objective-C / iOS development. Right now, whenever I try to run my code on the simulator (I do not have an actual device atm), it comes up with a black screen. Code can be found here. Compiling and running gives me the following error:

        2014-08-23 10:42:57.429 Calculator[1862:60b] *** Terminating app due to uncaught exception 'NSUnknownKeyException',
        reason: '[<XYZViewController 0xe436640> setValue:forUndefinedKey:]: this class is not key value coding-compliant
        for the key didgetPressed.'
        *** First throw call stack:
        (
            0   CoreFoundation      0x017ed1e4 __exceptionPreprocess + 180
            1   libobjc.A.dylib     0x0156c8e5 objc_exception_throw + 44
            2   CoreFoundation      0x0187cfe1 -[NSException raise] + 17
            3   Foundation          0x0122cd9e -[NSObject(NSKeyValueCoding) setValue:forUndefinedKey:] + 282
            4   Foundation          0x011991d7 _NSSetUsingKeyValueSetter + 88
            5   Foundation          0x01198731 -[NSObject(NSKeyValueCoding) setValue:forKey:] + 267
            6   Foundation          0x011fab0a -[NSObject(NSKeyValueCoding) setValue:forKeyPath:] + 412
            7   UIKit               0x004e31f4 -[UIRuntimeOutletConnection connect] + 106
            8   libobjc.A.dylib     0x0157e7de -[NSObject performSelector:] + 62
            9   CoreFoundation      0x017e876a -[NSArray makeObjectsPerformSelector:] + 314
            10  UIKit               0x004e1d4d -[UINib instantiateWithOwner:options:] + 1417
            11  UIKit               0x0034a6f5 -[UIViewController _loadViewFromNibNamed:bundle:] + 280
            12  UIKit               0x0034ae9d -[UIViewController loadView] + 302
            13  UIKit               0x0034b0d3 -[UIViewController loadViewIfRequired] + 78
            14  UIKit               0x0034b5d9 -[UIViewController view] + 35
            15  UIKit               0x0026b267 -[UIWindow addRootViewControllerViewIfPossible] + 66
            16  UIKit               0x0026b5ef -[UIWindow _setHidden:forced:] + 312
            17  UIKit               0x0026b86b -[UIWindow _orderFrontWithoutMakingKey] + 49
            18  UIKit               0x002763c8 -[UIWindow makeKeyAndVisible] + 65
            19  UIKit               0x00226bc0 -[UIApplication _callInitializationDelegatesForURL:payload:suspended:] + 2097
            20  UIKit               0x0022b667 -[UIApplication _runWithURL:payload:launchOrientation:statusBarStyle:statusBarHidden:] + 824
            21  UIKit               0x0023ff92 -[UIApplication handleEvent:withNewEvent:] + 3517
            22  UIKit               0x00240555 -[UIApplication sendEvent:] + 85
            23  UIKit               0x0022d250 _UIApplicationHandleEvent + 683
            24  GraphicsServices    0x037e2f02 _PurpleEventCallback + 776
            25  GraphicsServices    0x037e2a0d PurpleEventCallback + 46
            26  CoreFoundation      0x01768ca5 __CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE1_PERFORM_FUNCTION__ + 53
            27  CoreFoundation      0x017689db __CFRunLoopDoSource1 + 523
            28  CoreFoundation      0x0179368c __CFRunLoopRun + 2156
            29  CoreFoundation      0x017929d3 CFRunLoopRunSpecific + 467
            30  CoreFoundation      0x017927eb CFRunLoopRunInMode + 123
            31  UIKit               0x0022ad9c -[UIApplication _run] + 840
            32  UIKit               0x0022cf9b UIApplicationMain + 1225
            33  Calculator          0x00002c8d main + 141
            34  libdyld.dylib       0x01e34701 start + 1
            35  ???                 0x00000001 0x0 + 1
        )
        libc++abi.dylib: terminating with uncaught exception of type NSException
        (lldb)

    I have attempted the following things, and so far nothing has worked:

    1) checked the deployment info; it has the main interface set to Main
    2) double-checked all breakpoints to turn them off, and disabled all of them through Debug > Disable Breakpoints
    3) Reset Content and Settings on the iOS Simulator
    4) checked the issue with the LLDB debugger; however, from what I have read, the issue is no longer present with Xcode 5.1.1
    5) checked that localhost is still set to 127.0.0.1

    Thanks! - Terryn

    Read the article

  • How do the operators < and > work with pointers?

    - by Øystein
    Just for fun, I had a std::list of const char*, each element pointing to a null-terminated text string, and ran a std::list::sort() on it. As it happens, it sort of (no pun intended) did not sort the strings. Considering that it was working on pointers, that makes sense. According to the documentation of std::list::sort(), it (by default) uses the operator < between the elements to compare. Forgetting about the list for a moment, my actual question is: how do these (>, <, >=, <=) operators work on pointers in C++ and C? Do they simply compare the actual memory addresses?

        char* p1 = (char*) 0xDAB0BC47;
        char* p2 = (char*) 0xBABEC475;

    e.g. on a 32-bit, little-endian system, p1 > p2 because 0xDAB0BC47 > 0xBABEC475? Testing seems to confirm this, but I thought it'd be good to put it on StackOverflow for future reference. C and C++ both do some weird things to pointers, so you never really know...

    Read the article

  • How to send a future email using the at command.

    - by BHare
    I just need to send one email into the future, so I figured I'd be best off using at rather than cron. This is what I have so far; it's messy and ugly and not that great at escaping:

        <?php
        $out = array();

        // Where is the email going?
        $email = "[email protected]";

        // What is the body of the email (make sure to escape any double-quotes)
        $body = "This is what is actually emailed to me";
        $body = escapeshellcmd($body);
        $body = str_replace('!', '\!', $body);

        // What is the subject of the email (make sure to escape any double-quotes)
        $subject = "It's alive!";
        $subject = escapeshellcmd($subject);
        $subject = str_replace('!', '\!', $subject);

        // How long from now should this email be sent? IE: 1 minute, 32 days, 1 month 2 days.
        $when = "1 minute";

        $command= <<<END
        echo "
        echo \"$body\" > /tmp/email;
        mail -s \"$subject\" $email < /tmp/email;
        rm /tmp/email;
        " | at now + $when;
        END;

        $ret = exec($command, $out);
        print_r($out);
        ?>

    The output should be something like:

        warning: commands will be executed using /bin/sh
        job 60 at Thu Dec 30 19:39:00 2010

    However, I am doing something wrong with exec and not getting the result. The main thing is that this seems very messy. Are there any better alternative methods for doing this?

    PS: I had to add Apache's user (www-data for me) to /etc/at.allow ...which I don't like, but I can live with it.
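
    For comparison, a minimal sketch of the same idea in Python, feeding the commands to at over stdin instead of building one big shell string (the address is hypothetical, and shell escaping of the body/subject is still elided):

        import subprocess

        body = "This is what is actually emailed to me"
        subject = "Hello from the future"
        email = "user@example.com"   # hypothetical address
        when = "1 minute"

        # 'at' reads the commands to schedule from stdin, which avoids
        # most of the nested-quoting trouble above.
        script = 'echo "%s" | mail -s "%s" %s\n' % (body, subject, email)
        subprocess.run(['at', 'now', '+'] + when.split(), input=script, text=True)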

    Read the article

  • WCF Multiple Service Configuration Issue

    - by Goober
    Scenario: ignoring the fact that some of the settings might be wrong and inconsistent, why does the program fail to compile when I try to put these two separate configurations for WCF services into the same APP.CONFIG file? One was written by myself and another by a friend, yet I cannot get the application to compile. What have I missed?

    ERROR: Type Initialization Exception

    CODE:

        <configuration>
          <system.serviceModel>
            <!--START Service 1 CONFIGURATION-->
            <bindings>
              <netTcpBinding>
                <binding name="tcpServiceEndPoint" closeTimeout="00:01:00" openTimeout="00:01:00"
                         receiveTimeout="00:10:00" sendTimeout="00:01:00" transactionFlow="false"
                         transferMode="Buffered" transactionProtocol="OleTransactions"
                         hostNameComparisonMode="StrongWildcard" listenBacklog="10"
                         maxBufferPoolSize="524288" maxBufferSize="65536" maxConnections="10"
                         maxReceivedMessageSize="65536">
                  <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384"
                                maxBytesPerRead="4096" maxNameTableCharCount="16384" />
                  <reliableSession ordered="true" inactivityTimeout="00:05:00" enabled="true" />
                  <security mode="None">
                    <transport clientCredentialType="Windows" protectionLevel="EncryptAndSign" />
                    <message clientCredentialType="Windows" />
                  </security>
                </binding>
              </netTcpBinding>
            </bindings>
            <client>
              <endpoint address="" binding="netTcpBinding" bindingConfiguration="tcpServiceEndPoint"
                        contract="ListenerService.IListenerService" name="tcpServiceEndPoint" />
            </client>
            <!--END Service 1 CONFIGURATION-->

    Read the article

  • Calling into a saved java object via JNI from a different thread

    - by Drake Amara
    I have a Java object which calls into a C++ shared object via JNI. In C++, I am saving a reference to the JNIEnv and jobject:

        JavaVM * jvm;
        JNIEnv * myEnv;
        jobject myobj;

        JNIEXPORT void JNICALL Java_org_api_init (JNIEnv *env, jobject jObj)
        {
            myEnv = env;
            myobj = jObj;
        }

    I also have a GLSurface renderer, and it eventually calls the C++ shared object mentioned above on a different thread, the GLThread. I am then trying to call back into my original Java object using the jobject I saved initially, but I think because I am on the GLThread, I get the following error:

        W/dalvikvm(16101): JNI WARNING: 0x41ded218 is not a valid JNI reference
        I/dalvikvm(16101): "GLThread 981" prio=5 tid=15 RUNNABLE
        I/dalvikvm(16101): | group="main" sCount=0 dsCount=0 obj=0x41d6e220 self=0x5cb11078
        I/dalvikvm(16101): | sysTid=16133 nice=0 sched=0/0 cgrp=apps handle=1555429136
        I/dalvikvm(16101): | schedstat=( 0 0 0 ) utm=42 stm=32 core=1

    The code calling back into Java:

        void setData()
        {
            jvm->AttachCurrentThread(&myEnv, 0);

            jclass javaClass = myEnv->FindClass("com/myapp/myClass");
            if (javaClass == NULL) {
                LOGD("ERROR - cant find class");
            }

            jmethodID method = myEnv->GetMethodID(javaClass, "updateDataModel", "()V");
            if (method == NULL) {
                LOGD("ERROR - cant access method");
            }

            // this works, but its a new java object
            //jobject myobj2 = myEnv->NewObject(javaClass, method);

            // this is where the crash occurs
            myEnv->CallVoidMethod(myobj, method, NULL);
        }

    If instead I create a new jobject using env->NewObject, I can successfully call back into Java, but it is a new object and I don't want that. I need to get back to my original Java object. Is it a matter of switching threads before I call back into Java? If so, how do I do so?

    Read the article
