Search Results



  • Dangling pointers and double free

    - by user151410
    After some painful experiences, I understand the problem of dangling pointers and double free, and I am looking for proper solutions. aStruct has a number of fields, including other arrays.

        aStruct *A = NULL, *B = NULL;
        A = (aStruct*) calloc(1, sizeof(aStruct));
        B = A;
        free_aStruct(A);
        ... // bunch of other code in various places
        free_aStruct(B);

    Is there any way to write free_aStruct(X) so that free_aStruct(B) exits gracefully?

        void free_aStruct(aStruct *X) {
            if (X != NULL) {
                if (X->a != NULL) { free(X->a); X->a = NULL; }
                free(X);
                X = NULL;
            }
        }

    Doing the above only sets A = NULL when free_aStruct(A) is called; B is now dangling. How can this situation be avoided or remedied? Is reference counting the only viable solution, or are there other "defensive" free approaches to prevent free_aStruct(B) from exploding? Thanks, Russ
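    A common partial defense, not from the original post, is to pass the address of the owning pointer so the helper can clear the caller's own variable; a minimal C sketch, assuming a single heap-allocated field named a:

        #include <stdlib.h>

        typedef struct aStruct { int *a; } aStruct;   /* assumed minimal layout */

        /* Frees *Xp and clears the caller's pointer. Safe to call twice through the
           same variable, but NOT through two different aliases of the same block. */
        void free_aStruct2(aStruct **Xp) {
            if (Xp == NULL || *Xp == NULL) return;
            free((*Xp)->a);
            (*Xp)->a = NULL;
            free(*Xp);
            *Xp = NULL;
        }

    Calling free_aStruct2(&A) twice is then harmless, but calling it on a separate alias such as &B after the block was already freed through A is still a double free; only a single-owner convention or reference counting removes that risk.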


  • Setting a profile property in asp.net MVC while creating a user.

    - by twal
    In my application, an authorized user will register a new customer and set their username, password, etc. The customer will not be registering themselves. I have the following code in a class file which creates the new user, and it works just fine.

        public MembershipCreateStatus CreateUser(string userName, string password, string email, string employeeID)
        {
            if (String.IsNullOrEmpty(userName)) throw new ArgumentException("Value cannot be null or empty.", "userName");
            if (String.IsNullOrEmpty(password)) throw new ArgumentException("Value cannot be null or empty.", "password");
            if (String.IsNullOrEmpty(email)) throw new ArgumentException("Value cannot be null or empty.", "email");

            MembershipCreateStatus status;
            // _membershipProvider.CreateUser("asdas", "ds123BB", email, "asdasd", "asdasdad", true, null, out status);
            _membershipProvider.CreateUser(userName, password, email, null, null, true, null, out status);
            string answer = status.ToString();
            return status;
        }

    However, I would also like to set an EmployeeID profile property when creating this user. How would I set the EmployeeID when creating a new user? Thanks
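    One possible approach, not from the original post and assuming an EmployeeID profile property is configured in web.config for the default profile provider, is to create the profile for the new user name after CreateUser succeeds and set the property there:

        // Sketch only: "EmployeeID" must exist as a profile property in web.config.
        if (status == MembershipCreateStatus.Success)
        {
            var profile = System.Web.Profile.ProfileBase.Create(userName, true);
            profile.SetPropertyValue("EmployeeID", employeeID);
            profile.Save();
        }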


  • Creating a [materialised]view from generic data in Oracle/Mysql

    - by Andrew White
    I have a generic data model with three tables:

        CREATE TABLE Properties (
          propertyId int(11) NOT NULL AUTO_INCREMENT,
          name varchar(80) NOT NULL,
          PRIMARY KEY (propertyId)
        );

        CREATE TABLE Customers (
          customerId int(11) NOT NULL AUTO_INCREMENT,
          customerName varchar(80) NOT NULL,
          PRIMARY KEY (customerId)
        );

        CREATE TABLE PropertyValues (
          propertyId int(11) NOT NULL,
          customerId int(11) NOT NULL,
          value varchar(80) NOT NULL
        );

        INSERT INTO Properties VALUES (1, 'Age');
        INSERT INTO Properties VALUES (2, 'Weight');
        INSERT INTO Customers VALUES (1, 'Bob');
        INSERT INTO Customers VALUES (2, 'Tom');
        INSERT INTO PropertyValues VALUES (1, 1, '34');
        INSERT INTO PropertyValues VALUES (2, 1, '80KG');
        INSERT INTO PropertyValues VALUES (1, 2, '24');
        INSERT INTO PropertyValues VALUES (2, 2, '53KG');

    What I would like to do is create a view that has as columns all the rows in Properties and as rows the entries in Customers, with the column values populated from PropertyValues, e.g.

        customerId  Age  Weight
        1           34   80KG
        2           24   53KG

    I'm thinking I need a stored procedure to do this and perhaps a materialised view (the entries in the Properties table change rarely). Any tips?
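    For a fixed, known set of properties, a plain pivot with conditional aggregation produces that shape and could be used to populate a materialised table; a sketch, not from the original post:

        SELECT c.customerId,
               MAX(CASE WHEN p.name = 'Age'    THEN pv.value END) AS Age,
               MAX(CASE WHEN p.name = 'Weight' THEN pv.value END) AS Weight
        FROM Customers c
        LEFT JOIN PropertyValues pv ON pv.customerId = c.customerId
        LEFT JOIN Properties p      ON p.propertyId  = pv.propertyId
        GROUP BY c.customerId;

    Since Properties changes rarely, a stored procedure could build this statement dynamically (one MAX(CASE ...) column per row of Properties) and refresh a hard table with the result.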


  • How to read data from file(.dat) in append mode

    - by govardhan
    We have an application which requires us to read data from a file (.dat) dynamically using deserialization. We actually get the first object, but accessing the other objects in a "for" loop throws a NullPointerException and "java.io.StreamCorruptedException: invalid type code: AC".

    Writing objects:

        File file = null;
        FileOutputStream fos = null;
        BufferedOutputStream bos = null;
        ObjectOutputStream oos = null;
        try {
            file = new File("account4.dat");
            fos = new FileOutputStream(file, true);
            bos = new BufferedOutputStream(fos);
            oos = new ObjectOutputStream(bos);
            oos.writeObject(m);
            System.out.println("object serialized");
            amlist = new MemberAccountList();
            oos.close();
        } catch (Exception ex) {
            ex.printStackTrace();
        }

    Reading objects:

        try {
            MemberAccount m1;
            file = new File("account4.dat"); // add your code here
            fis = new FileInputStream(file);
            bis = new BufferedInputStream(fis);
            ois = new ObjectInputStream(bis);
            System.out.println(ois.readObject());
            while (ois.readObject() != null) {
                m1 = (MemberAccount) ois.readObject();
                System.out.println(m1.toString());
            }
            /* mList.addElement(m1);   // Here we have the issue throwing the NullPointerException
            Enumeration elist = mList.elements();
            while (elist.hasMoreElements()) {
                obj = elist.nextElement();
                System.out.println(obj.toString());
            } */
        } catch (ClassNotFoundException e) {
        } catch (EOFException e) {
            System.out.println("end");
        } catch (Exception ex) {
            ex.printStackTrace();
        }
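    Two things are likely going on here, stated as an assessment rather than a certainty: opening the file with new FileOutputStream(file, true) plus a fresh ObjectOutputStream appends a second stream header, which is what produces "invalid type code: AC" on read, and the read loop calls readObject() twice per iteration (once in the while condition, once in the body), so every other object is skipped. A commonly used sketch, with an assumed helper class name:

        import java.io.*;
        import java.util.*;

        // Append without writing a second stream header (use only when the file already has data).
        class AppendingObjectOutputStream extends ObjectOutputStream {
            AppendingObjectOutputStream(OutputStream out) throws IOException { super(out); }
            @Override protected void writeStreamHeader() throws IOException { reset(); }
        }

        class AccountFileReader {
            // Call readObject() once per iteration and stop on EOFException.
            static List<Object> readAll(File f) throws IOException, ClassNotFoundException {
                List<Object> items = new ArrayList<Object>();
                ObjectInputStream ois = new ObjectInputStream(new BufferedInputStream(new FileInputStream(f)));
                try {
                    while (true) {
                        try { items.add(ois.readObject()); }
                        catch (EOFException eof) { break; }
                    }
                } finally {
                    ois.close();
                }
                return items;
            }
        }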


  • Optimize a MySQL count each duplicate Query

    - by Onema
    I have the following query that gets the city name, city id, the region name, and a count of duplicate names for that record:

        SELECT Country_CA.City AS currentCity, Country_CA.CityID, globe_region.region_name,
               (SELECT count(Country_CA.City) FROM Country_CA WHERE City LIKE currentCity) as counter
        FROM Country_CA
        LEFT JOIN globe_region ON globe_region.region_id = Country_CA.RegionID
                              AND globe_region.country_code = Country_CA.CountryCode
        ORDER BY City

    This example is for Canada, and the cities will be displayed on a dropdown list. There are a few towns in Canada, and in other countries, that have the same name. If there is more than one town with the same name, the region name will be appended to the town name. Region names are found in the globe_region table. Country_CA and globe_region look similar to this (I have changed a few things for visualization purposes):

        CREATE TABLE IF NOT EXISTS `Country_CA` (
          `City` varchar(75) NOT NULL DEFAULT '',
          `RegionID` varchar(10) NOT NULL DEFAULT '',
          `CountryCode` varchar(10) NOT NULL DEFAULT '',
          `CityID` int(11) NOT NULL DEFAULT '0',
          PRIMARY KEY (`City`,`RegionID`),
          KEY `CityID` (`CityID`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8;

    and

        CREATE TABLE IF NOT EXISTS `globe_region` (
          `country_code` char(2) COLLATE utf8_unicode_ci NOT NULL,
          `region_code` char(2) COLLATE utf8_unicode_ci NOT NULL,
          `region_name` varchar(50) COLLATE utf8_unicode_ci NOT NULL,
          PRIMARY KEY (`country_code`,`region_code`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;

    The query at the top does exactly what I want it to do, but it takes way too long to generate a list for 5000 records (about 10 seconds). I would like to know if there is a way to optimize the sub-query in order to obtain the same results faster. The results should look like this:

        City        CityID    region_name        counter
        sheraton    2349269   British Columbia   1
        sherbrooke  2349270   Quebec             2
        sherbrooke  2349271   Nova Scotia        2
        shere       2349273   British Columbia   1
        sherridon   2349274   Manitoba           1
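    One way to avoid re-running the correlated COUNT for every row, offered as a sketch rather than a drop-in answer, is to join a derived table of per-name counts once (the existing primary key already starts with City, so the grouping can use it):

        SELECT c.City AS currentCity, c.CityID, g.region_name, d.counter
        FROM Country_CA c
        LEFT JOIN globe_region g ON g.region_id = c.RegionID
                                AND g.country_code = c.CountryCode
        JOIN (SELECT City, COUNT(*) AS counter
              FROM Country_CA
              GROUP BY City) d ON d.City = c.City
        ORDER BY c.City;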


  • Use Javascript RegEx to extract column names from SQLite Create Table SQL

    - by NimbusSoftware
    I'm trying to extract column names from a SQLite result set from sqlite_master's sql column. I get hosed up in the regular expressions in the match() and split() functions.

        t1.executeSql('SELECT name, sql FROM sqlite_master WHERE type="table" and name!="__WebKitDatabaseInfoTable__";', [],
            function(t1, result) {
                for (i = 0; i < result.rows.length; i++) {
                    var tbl = result.rows.item(i).name;
                    var dbSchema = result.rows.item(i).sql;
                    // errors out on next line
                    var columns = dbSchema.match(/.*CREATE\s+TABLE\s+(\S+)\s+\((.*)\).*/)[2].split(/\s+[^,]+,?\s*/);
                }
            },
            function() { console.log('err1'); }
        );

    I want to parse SQL statements like these...

        CREATE TABLE sqlite_sequence(name,seq);
        CREATE TABLE tblConfig (Key TEXT NOT NULL,Value TEXT NOT NULL);
        CREATE TABLE tblIcon (IconID INTEGER NOT NULL PRIMARY KEY,png TEXT NOT NULL,img32 TEXT NOT NULL,img64 TEXT NOT NULL,Version TEXT NOT NULL)

    into strings like these...

        name,seq
        Key,Value
        IconID,png,img32,img64,Version

    Any help with a RegEx would be greatly appreciated.
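    The likely reason match() blows up, stated as an assumption: the first statement, CREATE TABLE sqlite_sequence(name,seq), has no whitespace before the parenthesis, so the \s+\( part never matches, match() returns null, and indexing [2] on null throws. A more forgiving sketch that also splits the column list more simply:

        // Sketch (assumes simple column definitions without nested parentheses).
        function columnNames(createSql) {
            var m = createSql.match(/CREATE\s+TABLE\s+\S+?\s*\(([\s\S]*)\)/i);
            if (!m) return [];
            return m[1].split(',').map(function (col) {
                return col.trim().split(/\s+/)[0];   // first token of each column definition
            });
        }

        // columnNames("CREATE TABLE tblConfig (Key TEXT NOT NULL,Value TEXT NOT NULL)")
        //   -> ["Key", "Value"]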


  • Saving tree-structures in Databases

    - by Nina Null
    Hello everyone. I use Hibernate/Spring and a MySQL database for my data management. Currently I display a tree structure in a JTable. A tree can have several branches; a branch in turn can have several branches (up to nine levels) or leaves. Lately I have performance problems as soon as I create new branches on deeper levels. At this time a branch has a foreign key to its parent, and the domain object has access to its parent by calling getParent(), which returns the parent branch. The deeper the level, the longer it takes to create a new branch. Microbenchmark results for creating a new branch are:

        Level 1:  32 ms
        Level 3:  80 ms
        Level 9: 232 ms

    Obviously the level (which means the number of parents) is responsible for this. So I wanted to ask if there are any approaches to work around this kind of problem. I don't understand why Hibernate needs to know about the whole object tree (all parents up to the root) while creating a new branch. But as far as I know this can be the only reason for the delay when creating a new branch, because a branch doesn't have any other relations to other objects. I would be very thankful for any workarounds or suggestions. greets, jambusa
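    One thing worth checking, offered as an assumption rather than a diagnosis, is whether the parent association is mapped eagerly; with an eager many-to-one, saving a level-9 branch drags the whole ancestor chain into the session. A minimal sketch of a lazy mapping, with assumed entity and column names:

        import javax.persistence.*;

        @Entity
        public class Branch {
            @Id @GeneratedValue
            private Long id;

            // Lazy, so creating a child does not force loading every ancestor.
            @ManyToOne(fetch = FetchType.LAZY)
            @JoinColumn(name = "parent_id")
            private Branch parent;

            public Branch getParent() { return parent; }
            public void setParent(Branch parent) { this.parent = parent; }
        }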


  • MySQL SELECT combining 3 SELECTs INTO 1

    - by Martin Tóth
    Consider the following tables in a MySQL database:

        entries:  creator_id INT, entry TEXT, is_expired BOOL
        other:    creator_id INT, entry TEXT
        userdata: creator_id INT, name VARCHAR, etc...

    In entries and other, there can be multiple entries by one creator. The userdata table is read-only for me (placed in another database). I'd like to achieve the following SELECT result:

        +------------+---------+---------+-------+
        | creator_id | entries | expired | other |
        +------------+---------+---------+-------+
        |      10951 |      59 |      55 |    39 |
        |      70887 |      41 |      34 |   108 |
        |      88309 |      38 |      20 |   102 |
        |      94732 |       0 |       0 |    86 |
        ...

    where entries is equal to SELECT COUNT(entry) FROM entries GROUP BY creator_id, expired is equal to SELECT COUNT(entry) FROM entries WHERE is_expired = 0 GROUP BY creator_id, and other is equal to SELECT COUNT(entry) FROM other GROUP BY creator_id. I need this structure because after doing this SELECT, I need to look up user data in the userdata table, which I planned to do with an INNER JOIN and then select the desired columns. I solved this problem by selecting NULL into each column that does not apply for a given SELECT:

        SELECT creator_id,
               COUNT(any_entry) as entries,
               COUNT(expired_entry) as expired,
               COUNT(other_entry) as other
        FROM (
            SELECT creator_id, entry AS any_entry, NULL AS expired_entry, NULL AS other_entry FROM entries
            UNION
            SELECT creator_id, NULL AS any_entry, entry AS expired_entry, NULL AS other_entry FROM entries WHERE is_expired = 1
            UNION
            SELECT creator_id, NULL AS any_entry, NULL AS expired_entry, entry AS other_entry FROM other
        ) AS tTemp
        GROUP BY creator_id
        ORDER BY entries DESC, expired DESC, other DESC;

    I've left out the INNER JOIN and the selection of other columns from the userdata table on purpose (my question being about combining 3 SELECTs into 1). Is my idea valid, i.e. am I trying to use the right "construction" for this? Are these kinds of SELECTs possible without creating an "empty" column (some kind of JOIN)? Should I do it "outside the DB": make 3 SELECTs, put them in order (say, Python lists/dicts) and then do the additional SELECTs for userdata? The solution to a similar question does not return rows where entries and expired are 0. Thank you for your time.
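    A sketch of an alternative, not from the original post, that keeps creators who only appear in other (such as 94732) and folds in the planned userdata join; it uses is_expired = 1 for the expired count, as the UNION version does:

        SELECT u.creator_id, u.name,
               COALESCE(e.entries, 0) AS entries,
               COALESCE(e.expired, 0) AS expired,
               COALESCE(o.other, 0)   AS other
        FROM userdata u
        LEFT JOIN (SELECT creator_id,
                          COUNT(*)            AS entries,
                          SUM(is_expired = 1) AS expired
                   FROM entries
                   GROUP BY creator_id) e ON e.creator_id = u.creator_id
        LEFT JOIN (SELECT creator_id, COUNT(*) AS other
                   FROM other
                   GROUP BY creator_id) o ON o.creator_id = u.creator_id
        ORDER BY entries DESC, expired DESC, other DESC;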


  • Matlab: Randomize and Split

    - by Null-Hypothesis
    I have the following data matrix in Matlab, and I am trying to split it into multiple segments by passing a variable to a Matlab function. But before splitting I would like to shuffle the matrix. The size of my matrix is 150x4.

        s.data
            5.1000    3.5000    1.4000    0.2000
            4.9000    3.0000    1.4000    0.2000
            4.7000    3.2000    1.3000    0.2000
            4.6000    3.1000    1.5000    0.2000
            5.0000    3.6000    1.4000    0.2000
            ..

        s =
            data: [150x4 double]
            labels: [150x1 double]

    Coming from the R environment, I find Matlab very strange. Initially I thought the columns in the matrix had a relationship like in an R data frame, but that was a wrong assumption.
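    A minimal sketch, assuming the 150x4 matrix is in s.data and the requested number of segments is k:

        data = s.data;
        n = size(data, 1);
        shuffled = data(randperm(n), :);         % shuffle rows, keep each row intact

        k = 5;                                   % number of segments (assumed)
        edges = round(linspace(0, n, k + 1));
        segments = cell(1, k);
        for i = 1:k
            segments{i} = shuffled(edges(i)+1:edges(i+1), :);
        end

    If the labels must stay aligned with the rows, apply the same randperm index vector to s.labels before splitting.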


  • grailsApplication access in Grails unit Test

    - by Reza
    I am trying to write unit tests for a service which uses grailsApplication.config for some settings. It seems that in my unit tests the service instance cannot access the config (null pointer) for its settings, while it can access those settings when I run "run-app". How can I make the service see the grailsApplication config in my unit tests?

        class MapCloudMediaServerControllerTests {

            def grailsApplication

            @Before
            public void setUp() {
                grailsApplication.config = '''
                    video {
                        location = "C:\\tmp\\" // or shared filesystem drive for a cluster
                        yamdi {
                            path = "C:\\FFmpeg\\ffmpeg-20121125-git-26c531c-win64-static\\bin\\yamdi"
                        }
                        ffmpeg {
                            fileExtension = "flv" // use flv or mp4
                            conversionArgs = "-b 600k -r 24 -ar 22050 -ab 96k"
                            path = "C:\\FFmpeg\\ffmpeg-20121125-git-26c531c-win64-static\\bin\\ffmpeg"
                            makethumb = "-an -ss 00:00:03 -an -r 2 -vframes 1 -y -f mjpeg"
                        }
                        ffprobe {
                            path = "C:\\FFmpeg\\ffmpeg-20121125-git-26c531c-win64-static\\bin\\ffprobe"
                            params = ""
                        }
                        flowplayer {
                            version = "3.1.2"
                        }
                        swfobject {
                            version = ""
                            qtfaststart {
                                path = "C:\\FFmpeg\\ffmpeg-20121125-git-26c531c-win64-static\\bin\\qtfaststart"
                            }
                        }
                    }
                '''
            }

            @Test
            void testMpegtoFlvConvertor() {
                log.info "In test Mpg to Flv Convertor function!"
                def controller = new MapCloudMediaServerController()
                assert controller != null
                controller.videoService = new VideoService()
                assert controller.videoService != null
                log.info "Is the video service null? ${controller.videoService == null}"
                controller.videoService.grailsApplication = grailsApplication
                log.info "Is grailsApplication null? ${controller.videoService.grailsApplication == null}"
                // Very important part for simulating the HTTP request
                controller.metaClass.request = new MockMultipartHttpServletRequest()
                controller.request.contentType = "video/mpg"
                controller.request.content = new File("..\\MapCloudMediaServer\\web-app\\videoclips\\sample3.mpg").getBytes()
                controller.mpegtoFlvConvertor()
                byte[] videoOut = IOUtils.toByteArray(controller.response.getOutputStream())
                def outputFile = new File("..\\MapCloudMediaServer\\web-app\\videoclips\\testsample3.flv")
                outputFile.append(videoOut)
            }
        }
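    A hedged suggestion, assuming the Grails 2.x unit-test mixins: grailsApplication is not injected into a plain test class, and config expects a ConfigObject rather than a raw String, so one option in setUp is to parse the settings with ConfigSlurper and merge them in (forward slashes used here only to avoid the escaping noise):

        grailsApplication.config.merge(new ConfigSlurper().parse('''
            video {
                ffmpeg {
                    fileExtension = "flv"
                    path = "C:/FFmpeg/ffmpeg-20121125-git-26c531c-win64-static/bin/ffmpeg"
                }
            }
        '''))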


  • how to monitor the program code execution? (file creation and modification by code lines etc)

    - by infant programmer
    My program is about triggering XSL transformation. The code that carries out the transformation creates some .dll and .tmp files and deletes them pretty soon after the transformation is completed. It is almost impossible for me to monitor the creation and deletion of these files manually, so I want to include some code to display "which code line has created/modified which .tmp and .dll files" in the console window. This is the relevant part of the code:

        string strXmlQueryTransformPath = @"input.xsl";
        string strXmlOutput = string.Empty;
        StringReader srXmlInput = null;
        StringWriter swXmlOutput = null;
        XslCompiledTransform xslTransform = null;
        XPathDocument xpathXmlOrig = null;
        XsltSettings xslSettings = null;
        MemoryStream objMemoryStream = null;

        objMemoryStream = new MemoryStream();
        xslTransform = new XslCompiledTransform(false);
        xpathXmlOrig = new XPathDocument("input.xml");
        xslSettings = new XsltSettings();
        xslSettings.EnableScript = true;
        xslTransform.Load(strXmlQueryTransformPath, xslSettings, new XmlUrlResolver());
        xslTransform.Transform(xpathXmlOrig, null, objMemoryStream);
        objMemoryStream.Position = 0;
        StreamReader objStreamReader = new StreamReader(objMemoryStream);
        strXmlOutput = objStreamReader.ReadToEnd();
        // make use of Data in string "strXmlOutput"

    Google and MSDN search couldn't help me much.
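    One way to see which files appear and disappear while the transform runs, not something in the original code, is to watch the temp directory with FileSystemWatcher and log a timestamp for each event, then bracket the Load and Transform calls with console markers so the timestamps can be correlated with code lines. A sketch, assuming the temporary assemblies land under %TEMP% (where XslCompiledTransform with EnableScript normally compiles its script assemblies):

        using System;
        using System.IO;

        static class TempFileMonitor
        {
            // Watch a directory and log every create/delete with a timestamp.
            public static FileSystemWatcher Watch(string directory)
            {
                var watcher = new FileSystemWatcher(directory);
                watcher.IncludeSubdirectories = true;
                watcher.Created += (s, e) => Console.WriteLine("{0:HH:mm:ss.fff} created {1}", DateTime.Now, e.FullPath);
                watcher.Deleted += (s, e) => Console.WriteLine("{0:HH:mm:ss.fff} deleted {1}", DateTime.Now, e.FullPath);
                watcher.EnableRaisingEvents = true;
                return watcher;
            }
        }

        // Usage sketch:
        //   var w = TempFileMonitor.Watch(Path.GetTempPath());
        //   Console.WriteLine("before Load");       xslTransform.Load(...);
        //   Console.WriteLine("before Transform");  xslTransform.Transform(...);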


  • Incorrect component when querying immediately after insert using NHibernate

    - by Am
    I have the following mapping for my table in MySQL:

        <class name="Tag, namespace" table="tags">
          <id name="id" type="Int32" unsaved-value="0">
            <generator class="native"></generator>
          </id>
          <property name="name" type="String" not-null="true"></property>
          <component name="record_dates" class="DateMetaData, namespace">
            <property name="created_at" type="DateTime" not-null="true"></property>
            <property name="updated_at" type="DateTime" not-null="true"></property>
          </component>
        </class>

    As you see, the record_dates property is defined as a component field of type DateMetaData. Both the created_at and updated_at columns in the tags table are set via triggers. Thus I can insert a new record like this:

        var newTag = new Tag() { name = "some string here" };
        Int32 id = (Int32)Session.Save(newTag);
        Session.Flush();

        ITag t = Session.Get<Tag>(id);
        ViewData["xxx"] = t.name;                      // -----> not null
        ViewData["xxx"] = t.record_dates.created_at;   // -----> is null

    However, when querying the same record back immediately after it was inserted, the record_dates field ends up null even though in the table those columns have values. Can anyone please point out why Session.Get ignores getting everything back from the table? Is it because it caches the newly created record, for which record_dates is null? If so, how can it be told to ignore the cached version?
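    Because the columns are filled in by triggers, the in-memory Tag that was just saved never receives those values, and Session.Get returns that same instance from the first-level cache. Two hedged options, neither taken from the original post: re-read the row with ISession.Refresh after the flush, or investigate mapping the component's properties as database-generated so NHibernate selects them back automatically. A sketch of the first option:

        // Sketch: force NHibernate to re-select the row, including the trigger-populated columns.
        var id = (Int32)Session.Save(newTag);
        Session.Flush();
        Session.Refresh(newTag);                       // re-reads the row from the tags table
        var createdAt = newTag.record_dates.created_at;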


  • How to properly recreate BITMAP, that was previously shared by CreateFileMapping()?

    - by zim22
    Dear friends, I need your help. I need to send a .bmp file to another process (a dialog box) and display it there, using an MMF (Memory Mapped File). But the problem is that the image displays in reversed colors and upside down.

    In the first application I open the picture from the HDD and link it to the named MMF "Gigabyte_picture":

        HANDLE hFile = CreateFile("123.bmp", GENERIC_READ, FILE_SHARE_READ, NULL, OPEN_EXISTING, 0, NULL);
        CreateFileMapping(hFile, NULL, PAGE_READONLY, 0, 0, "Gigabyte_picture");

    In the second application I open the mapped bmp file and at the end display m_HBitmap on the static component, using the SendMessage function:

        HANDLE hMappedFile = OpenFileMapping(FILE_MAP_READ, FALSE, "Gigabyte_picture");
        PBYTE pbData = (PBYTE) MapViewOfFile(hMappedFile, FILE_MAP_READ, 0, 0, 0);

        BITMAPINFO bmpInfo = { 0 };
        LONG lBmpSize = 60608; // size of the bmp file in bytes
        bmpInfo.bmiHeader.biBitCount = 32;
        bmpInfo.bmiHeader.biHeight = 174;
        bmpInfo.bmiHeader.biWidth = 87;
        bmpInfo.bmiHeader.biPlanes = 1;
        bmpInfo.bmiHeader.biSizeImage = lBmpSize;
        bmpInfo.bmiHeader.biSize = sizeof(BITMAPINFOHEADER);

        UINT * pPixels = 0;
        HDC hDC = CreateCompatibleDC(NULL);
        HBITMAP m_HBitmap = CreateDIBSection(hDC, &bmpInfo, DIB_RGB_COLORS, (void **)& pPixels, NULL, 0);
        SetBitmapBits(m_HBitmap, lBmpSize, pbData);
        SendMessage(gStaticBox, STM_SETIMAGE, (WPARAM)IMAGE_BITMAP, (LPARAM)m_HBitmap);

        /////////////
        HWND gStaticBox = CreateWindowEx(0, "STATIC", "",
            SS_CENTERIMAGE | SS_REALSIZEIMAGE | SS_BITMAP | WS_CHILD | WS_VISIBLE,
            10, 10, 380, 380, myDialog, (HMENU)-1, NULL, NULL);
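    A probable cause, offered as an assumption: the mapped view starts with the BMP file headers rather than raw pixels, and the pixel block in a .bmp is stored bottom-up in BGR(A) order, so pushing the whole file through SetBitmapBits with hard-coded dimensions shifts, flips and mis-colours the image. A sketch that reads the headers from the mapping and copies only the pixel data into the DIB section (which itself expects bottom-up rows when biHeight is positive), assuming an uncompressed 32-bpp bitmap:

        const BITMAPFILEHEADER* pFile = (const BITMAPFILEHEADER*)pbData;
        const BITMAPINFOHEADER* pInfo = (const BITMAPINFOHEADER*)(pbData + sizeof(BITMAPFILEHEADER));
        const BYTE* pPixelData = pbData + pFile->bfOffBits;

        BITMAPINFO bmi = { 0 };
        bmi.bmiHeader = *pInfo;

        void* pBits = NULL;
        HDC hdc = CreateCompatibleDC(NULL);
        HBITMAP hBmp = CreateDIBSection(hdc, &bmi, DIB_RGB_COLORS, &pBits, NULL, 0);
        if (hBmp && pBits)
        {
            DWORD cb = pInfo->biSizeImage ? pInfo->biSizeImage
                                          : pInfo->biWidth * abs(pInfo->biHeight) * 4;
            memcpy(pBits, pPixelData, cb);   // same bottom-up layout, so no manual flip needed
        }
        SendMessage(gStaticBox, STM_SETIMAGE, (WPARAM)IMAGE_BITMAP, (LPARAM)hBmp);
        DeleteDC(hdc);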


  • Currently using View, Should I use a hard table instead?

    - by 1001010101
    I am currently debating whether mapping_uGroups_uProducts, which is currently a view, should be replaced by a hard table. The view is defined as follows:

        CREATE ALGORITHM=UNDEFINED DEFINER=`root`@`localhost` SQL SECURITY DEFINER
        VIEW `db`.`mapping_uGroups_uProducts` AS
            select distinct `X`.`upID` AS `upID`, `Z`.`ugID` AS `ugID`
            from ((`db`.`mapping_uProducts_Products` `X`
                   join `db`.`productsInfo` `Y` on ((`X`.`pID` = `Y`.`pID`)))
                   join `db`.`mapping_uGroups_Groups` `Z` on ((`Y`.`gID` = `Z`.`gID`)));

    My current query is:

        SELECT upID FROM uProductsInfo
        JOIN fs_uProducts USING (upID)
        JOIN mapping_uGroups_uProducts USING (upID)  -- could be faster if we use a hard table and index
        JOIN mapping_fs_key USING (fsKeyID)
        WHERE fsName="OVERALL"
          AND ugID=1
        ORDER BY score DESC
        LIMIT 0,30;

    which is pretty slow (about 10 seconds for 30 results). I think the reason my query is so slow is that it relies on a view which has no index to speed things up.

        +----+-------------+----------------+--------+----------------+---------+---------+--------------------------------+-------+---------------------------------+
        | id | select_type | table          | type   | possible_keys  | key     | key_len | ref                            | rows  | Extra                           |
        +----+-------------+----------------+--------+----------------+---------+---------+--------------------------------+-------+---------------------------------+
        |  1 | PRIMARY     | mapping_fs_key | const  | PRIMARY,fsName | fsName  | 386     | const                          |     1 | Using temporary; Using filesort |
        |  1 | PRIMARY     | <derived2>     | ALL    | NULL           | NULL    | NULL    | NULL                           | 19706 | Using where                     |
        |  1 | PRIMARY     | uProductsInfo  | eq_ref | PRIMARY        | PRIMARY | 4       | mapping_uGroups_uProducts.upID |     1 | Using index                     |
        |  1 | PRIMARY     | fs_uProducts   | ref    | upID           | upID    | 4       | db.uProductsInfo.upID          |   221 | Using where                     |
        |  2 | DERIVED     | X              | ALL    | PRIMARY        | NULL    | NULL    | NULL                           | 40772 | Using temporary                 |
        |  2 | DERIVED     | Y              | eq_ref | PRIMARY        | PRIMARY | 4       | db.X.pID                       |     1 | Distinct                        |
        |  2 | DERIVED     | Z              | ref    | PRIMARY        | PRIMARY | 4       | db.Y.gID                       |     2 | Using index; Distinct           |
        +----+-------------+----------------+--------+----------------+---------+---------+--------------------------------+-------+---------------------------------+
        7 rows in set (0.48 sec)

    The EXPLAIN here looks pretty cryptic, and I don't know whether I should drop the view and write a script to just insert everything from the view into a hard table (obviously losing the flexibility of the view, since the mapping changes quite frequently). Does anyone have any idea how I can optimize my schema better?
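    If the view is replaced, the usual trade-off is keeping the copy fresh, since the mapping changes frequently. A hedged sketch of an indexed summary table plus the rebuild statement that would have to run whenever the underlying mapping tables change (or on a schedule, accepting some staleness):

        CREATE TABLE mapping_uGroups_uProducts_mat (
            upID INT NOT NULL,
            ugID INT NOT NULL,
            PRIMARY KEY (ugID, upID),
            KEY idx_upID (upID)
        );

        -- refresh
        TRUNCATE TABLE mapping_uGroups_uProducts_mat;
        INSERT INTO mapping_uGroups_uProducts_mat (upID, ugID)
        SELECT DISTINCT x.upID, z.ugID
        FROM mapping_uProducts_Products x
        JOIN productsInfo y           ON x.pID = y.pID
        JOIN mapping_uGroups_Groups z ON y.gID = z.gID;

    With the primary key on (ugID, upID), the WHERE ugID=1 filter and the JOIN ... USING (upID) in the query above can both use the index instead of scanning a derived table.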


  • ERROR 2019 Linker Error Visual Studio

    - by Corrie Duck
    Hey, I hope someone can tell me how to fix this issue I am having: I keep getting an LNK2019 error from Visual Studio for the following file. Most of the functions have been removed, so excuse the empty variables etc.

        Error LNK2019: unresolved external symbol "void * __cdecl OpenOneDevice(void *,struct _SP_DEVICE_INTERFACE_DATA *,char *)"
        (?OpenOneDevice@@YAPAXPAXPAU_SP_DEVICE_INTERFACE_DATA@@PAD@Z) referenced in function _wmain
        c:\Users\K\documents\visual studio 2010\Projects\test2\test2\test2.obj  test2

        #include "stdafx.h"
        #include <windows.h>
        #include <setupapi.h>

        SP_DEVICE_INTERFACE_DATA deviceInfoData;
        HDEVINFO hwDeviceInfo;
        HANDLE hOut;
        char *devName;

        //
        HANDLE OpenOneDevice(IN HDEVINFO hwDeviceInfo, IN PSP_DEVICE_INTERFACE_DATA DeviceInfoData, IN char *devName);

        //
        HANDLE OpenOneDevice(IN HDEVINFO HardwareDeviceInfo, IN PSP_DEVICE_INTERFACE_DATA DeviceInfoData, IN char *devName)
        {
            PSP_DEVICE_INTERFACE_DETAIL_DATA functionClassDeviceData = NULL;
            ULONG predictedLength = 0, requiredLength = 0;
            HANDLE hOut = INVALID_HANDLE_VALUE;

            SetupDiGetDeviceInterfaceDetail(HardwareDeviceInfo, DeviceInfoData, NULL, 0, &requiredLength, NULL);
            predictedLength = requiredLength;
            functionClassDeviceData = (PSP_DEVICE_INTERFACE_DETAIL_DATA)malloc(predictedLength);
            if (NULL == functionClassDeviceData)
            {
                return hOut;
            }
            functionClassDeviceData->cbSize = sizeof(SP_DEVICE_INTERFACE_DETAIL_DATA);
            if (!SetupDiGetDeviceInterfaceDetail(HardwareDeviceInfo, DeviceInfoData, functionClassDeviceData, predictedLength, &requiredLength, NULL))
            {
                free(functionClassDeviceData);
                return hOut;
            }
            //strcpy(devName, functionClassDeviceData->DevicePath);
            hOut = CreateFile(functionClassDeviceData->DevicePath,
                              GENERIC_READ | GENERIC_WRITE,
                              FILE_SHARE_READ | FILE_SHARE_WRITE,
                              NULL, OPEN_EXISTING, 0, NULL);
            free(functionClassDeviceData);
            return hOut;
        }

        //
        int _tmain(int argc, _TCHAR* argv[])
        {
            hOut = OpenOneDevice(hwDeviceInfo, &deviceInfoData, devName);
            if (hOut != INVALID_HANDLE_VALUE)
            {
                // error report
            }
            return 0;
        }

    Been driving me mad for hours. Any help appreciated.

    SOLVED THANKS TO CHRIS :-) Add: #pragma comment (lib, "Setupapi.lib")

    Thanks


  • List of Big-O for PHP functions?

    - by Kendall Hopkins
    After using PHP for a while now, I've noticed that not all PHP built-in functions are as fast as expected. Consider the two possible implementations below of a function that finds if a number is prime using a cached array of primes.

        // very slow for large $prime_array
        $prime_array = array( 2, 3, 5, 7, 11, 13, .... 104729, ... );
        $result_array = array();
        foreach ($array_of_number as $number) {
            $result_array[$number] = in_array($number, $large_prime_array);
        }

        // still decent performance for large $prime_array
        $prime_array = array( 2 => NULL, 3 => NULL, 5 => NULL, 7 => NULL,
                              11 => NULL, 13 => NULL, .... 104729 => NULL, ... );
        foreach ($array_of_number as $number) {
            $result_array[$number] = array_key_exists($number, $large_prime_array);
        }

    This is because in_array is implemented with a linear search, O(n), which slows down linearly as $prime_array grows, whereas array_key_exists is implemented with a hash lookup, O(1), which does not slow down unless the hash table gets extremely populated (in which case it's only O(log n)).

    So far I've had to discover the big-Os via trial and error, and occasionally by looking at the source code. Now for the question... is there a list of the theoretical (or practical) big-O times for all* the PHP built-in functions?

    *or at least the interesting ones

    For example, I find it very hard to predict the big-O of the functions listed below, because the possible implementation depends on unknown core data structures of PHP: array_merge, array_merge_recursive, array_reverse, array_intersect, array_combine, str_replace (with array inputs), etc.


  • Android: Problems downloading images and converting to bitmaps

    - by Mike
    Hi all, I am working on an application that downloads images from a url. The problem is that only some images are being correctly downloaded and others are not. First off, here is the problem code:

        public Bitmap downloadImage(String url) {
            HttpClient client = new DefaultHttpClient();
            HttpResponse response = null;
            try {
                response = client.execute(new HttpGet(url));
            } catch (ClientProtocolException cpe) {
                Log.i(LOG_FILE, "client protocol exception");
                return null;
            } catch (IOException ioe) {
                Log.i(LOG_FILE, "IOE downloading image");
                return null;
            } catch (Exception e) {
                Log.i(LOG_FILE, "Other exception downloading image");
                return null;
            }
            // Convert image from stream to bitmap object
            try {
                Bitmap image = BitmapFactory.decodeStream(response.getEntity().getContent());
                if (image == null)
                    Log.i(LOG_FILE, "image conversion failed");
                return image;
            } catch (Exception e) {
                Log.i(LOG_FILE, "Other exception while converting image");
                return null;
            }
        }

    So what I have is a method that takes the url as a string argument, downloads the image, converts the HttpResponse stream to a bitmap by means of the BitmapFactory.decodeStream method, and returns it. The problem is that when I am on a slow network connection (almost always 3G rather than Wi-Fi) some images are converted to null -- not all of them, only some of them. Using a Wi-Fi connection works perfectly; all the images are downloaded and converted properly. Does anyone know why this is happening? Or better, how can I fix this? How would I even go about testing to determine the problem? Any help is awesome; thank you!
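    A hedged explanation: BitmapFactory.decodeStream reads straight from the network stream, and on a slow 3G connection a short or stalled read can make decoding give up and return null. One common workaround, not from the original post, is to buffer the whole response into memory first and decode from the byte array; EntityUtils.toByteArray throws IOException, so the call fits inside the existing try/catch:

        import org.apache.http.util.EntityUtils;

        // Sketch: read the full response before decoding, so a slow stream
        // cannot truncate BitmapFactory's input.
        byte[] data = EntityUtils.toByteArray(response.getEntity());
        Bitmap image = BitmapFactory.decodeByteArray(data, 0, data.length);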


  • Python/Django Concatenate a string depending on whether that string exists

    - by Douglas Meehan
    I'm creating a property on a Django model called "address". I want address to be the concatenation of a number of fields on my model. The problem is that not all instances of this model will have values for all of these fields, so I want to concatenate only those fields that have values. What is the best/most Pythonic way to do this? Here are the relevant fields from the model:

        house = models.IntegerField('House Number', null=True, blank=True)
        suf = models.CharField('House Number Suffix', max_length=1, null=True, blank=True)
        unit = models.CharField('Address Unit', max_length=7, null=True, blank=True)
        stex = models.IntegerField('Address Extention', null=True, blank=True)
        stdir = models.CharField('Street Direction', max_length=254, null=True, blank=True)
        stnam = models.CharField('Street Name', max_length=30, null=True, blank=True)
        stdes = models.CharField('Street Designation', max_length=3, null=True, blank=True)
        stdessuf = models.CharField('Street Designation Suffix', max_length=1, null=True, blank=True)

    I could just do something like this:

        def _get_address(self):
            return "%s %s %s %s %s %s %s %s" % (self.house, self.suf, self.unit, self.stex,
                                                self.stdir, self.stnam, self.stdes, self.stdessuf)

    but then there would be extra blank spaces in the result. I could do a series of if statements and concatenate within each, but that seems ugly. What's the best way to handle this situation? Thanks.
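    One straightforward sketch: collect the fields in order, drop the ones that are None or empty, and join the rest with spaces.

        def _get_address(self):
            parts = [self.house, self.suf, self.unit, self.stex,
                     self.stdir, self.stnam, self.stdes, self.stdessuf]
            # Keep only fields that are neither None nor empty strings.
            return " ".join(str(p) for p in parts if p not in (None, ""))

        address = property(_get_address)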


  • Storing data in a MySQL database using MySQL & PHP

    - by comma
    I'm new to PHP and MySQL and I'm trying to store user-entered data from the fields $skill, $experience and $years. A user can also add additional sets of $skill, $experience and $years fields, so instead of one of each field there might be several of each. I was wondering how I can store these fields in my MySQL database using PHP. I have the following script, but I know it's wrong. Can someone help me fix it?

    Here is the PHP and MySQL code:

        $skill = serialize($_POST['skill']);
        $experience = serialize($_POST['experience']);
        $years = serialize($_POST['years']);

        for (($s = 0; $s < count($skill); $s++) &&
             ($x = 0; $x < count($experience); $x++) &&
             ($g = 0; $g < count($years); $g++)) {
            $mysqli = mysqli_connect("localhost", "root", "", "sitename");
            $query1 = "INSERT INTO learned_skills (skill, experience, years) VALUES ('" . $skill[$s] . "', '" . $experience[$x] . "', '" . $years[$g] . "')";
            if (!mysqli_query($mysqli, $query1)) {
                print mysqli_error($mysqli);
                return;
            }
        }

    Here is my MySQL table:

        CREATE TABLE learned_skills (
            id INT UNSIGNED NOT NULL AUTO_INCREMENT,
            skill TEXT NOT NULL,
            experience TEXT NOT NULL,
            years INT NOT NULL,
            PRIMARY KEY (id)
        );

        CREATE TABLE u_skills (
            id INT UNSIGNED NOT NULL AUTO_INCREMENT,
            skill_id INT UNSIGNED NOT NULL,
            users_id INT UNSIGNED NOT NULL,
            PRIMARY KEY (id)
        );
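    A hedged rewrite of the loop, not taken from the original post: keep the posted values as plain arrays instead of serialize()d strings, open the connection once, and insert the three parallel arrays with a prepared statement:

        <?php
        // Sketch: assumes skill[], experience[] and years[] are posted as parallel arrays.
        $skills      = $_POST['skill'];
        $experiences = $_POST['experience'];
        $years       = $_POST['years'];

        $mysqli = mysqli_connect("localhost", "root", "", "sitename");
        $stmt = mysqli_prepare($mysqli,
            "INSERT INTO learned_skills (skill, experience, years) VALUES (?, ?, ?)");

        for ($i = 0; $i < count($skills); $i++) {
            mysqli_stmt_bind_param($stmt, "ssi", $skills[$i], $experiences[$i], $years[$i]);
            if (!mysqli_stmt_execute($stmt)) {
                print mysqli_error($mysqli);
                return;
            }
        }
        mysqli_stmt_close($stmt);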


  • How do you search through a map?

    - by Jack Null
    I have a map:

        Map<String, String> ht = new HashMap();

    and I would like to know how to search through it and find anything matching a particular string, and if it is a match, store it in an ArrayList. The map contains strings like this: 1,2,3,4,5,5,5 and the matching string would be 5. So far I have this:

        String match = "5";
        ArrayList<String> result = new ArrayList<String>();
        Enumeration num = ht.keys();
        while (num.hasMoreElements()) {
            String number = (String) num.nextElement();
            if (number.equals(match)) {
                result.add(number);
            }
        }
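    A sketch using the Map API directly, assuming the comma-separated strings are the keys (note that HashMap has no keys() method; that is Hashtable): iterate entrySet() and test each comma-separated token of the key against the target.

        String match = "5";
        List<String> result = new ArrayList<String>();
        for (Map.Entry<String, String> entry : ht.entrySet()) {
            for (String token : entry.getKey().split(",")) {
                if (token.equals(match)) {
                    result.add(entry.getKey());
                    break;
                }
            }
        }

    If the comma-separated values (rather than the keys) are what should be searched, swap entry.getKey() for entry.getValue() in the inner loop and in the add call.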


  • How to backup using backup API's in c++

    - by user1603185
    I am writing an application that used to backup some specified file, therefore using the backup API calls i.e CreateFile BackupRead and WriteFile API's. getting errors Access violation reading location. I have attached code below. #include <windows.h> int main() { HANDLE hInput, hOutput; //m_filename is a variable holding the file path to read from hInput = CreateFile(L"C:\\Key.txt", GENERIC_READ, 0, NULL, OPEN_EXISTING, FILE_FLAG_BACKUP_SEMANTICS, NULL); //strLocation contains the path of the file I want to create. hOutput= CreateFile(L"C:\\tmp\\", GENERIC_WRITE, NULL, NULL, CREATE_ALWAYS, NULL, NULL); DWORD dwBytesToRead = 1024 * 1024 * 10; BYTE *buffer; buffer = new BYTE[dwBytesToRead]; BOOL bReadSuccess = false,bWriteSuccess = false; DWORD dwBytesRead,dwBytesWritten; LPVOID lpContext; //Now comes the important bit: do { bReadSuccess = BackupRead(hInput, buffer, sizeof(BYTE) *dwBytesToRead, &dwBytesRead, false, true, &lpContext); bWriteSuccess= WriteFile(hOutput, buffer, sizeof(BYTE) *dwBytesRead, &dwBytesWritten, NULL); }while(dwBytesRead == dwBytesToRead); return 0; } Any one suggest me how to use these API's? Thanks.


  • "Invalid assignment" error from == operator

    - by Tom
    I was trying to write a simple method:

        boolean validate(MyObject o) {
            // propertyA and propertyB are not primitive types
            return o.getPropertyA() == null && o.getPropertyB() == null;
        }

    and got a strange error on the == null part: "Syntax error on token ==. Invalid assignment operator." Maybe my Java is rusty after a season in PLSQL, so I tried a simpler example:

        Integer i = 4;
        i == null;      // compile error: Syntax error on token ==. Invalid assignment operator.

        Integer i2 = 4;
        if (i == null); // No problem

    How can this be? I'm using jdk160_05. To clarify: I'm not trying to assign anything, just do an && operation between two boolean values. I don't want to do this:

        if (o.propertyA() == null && o.propertyB() == null) {
            return true;
        } else {
            return false;
        }


  • a problem with parallel.foreach in initializing conversation manager

    - by Adrakadabra
    I use MVC2 and NHibernate 2.1.2. In a controller class I call the ForEachParty method like this:

        OrganizationStructureService.ForEachParty<Department>(department, null,
            p => {
                p.AddParentWithoutRemovingExistentAccountability(domainDepartment, AccountabilityTypeDbId.SupervisionDepartmentOfDepartment);
            },
            x => (!(x.AccountabilityType.Id == (int)AccountabilityTypeDbId.SupervisionDepartmentOfDepartment)));

        static public void ForEachParty<T>(Party party, PartyTypeDbId? partyType, Action action, Expression expression = null) where T : Party
        {
            IList chilrden = new List();
            IList acc = party.Children;
            if (party != null)
                action(party);
            if (partyType != null)
                acc = acc.Where(p => p.Child.PartyTypes.Any(c => c.Id == (int)partyType)).ToList();
            if (expression != null)
                acc = acc.AsQueryable().Where(expression).ToList();
            Parallel.ForEach(acc, p => {
                if (partyType == null)
                    ForEachParty<T>(p.Child, null, action);
                else
                    ForEachParty<T>(p.Child, partyType, action);
            });
        }

    But just after executing the action inside Parallel.ForEach, I don't know why the conversation is getting closed and I see "current conversation is not initialized yet or it is closed".
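    NHibernate's ISession, and the conversation that wraps it, is not thread-safe, so running the recursion through Parallel.ForEach means several threads hit the same session at once, which is consistent with the conversation suddenly reporting itself closed. A hedged first step is to keep the traversal sequential wherever the action touches NHibernate:

        // Sketch: same recursion, but iterate the children on the calling thread so
        // every NHibernate call stays with the thread that owns the session.
        foreach (var p in acc)
        {
            if (partyType == null)
                ForEachParty<T>(p.Child, null, action);
            else
                ForEachParty<T>(p.Child, partyType, action);
        }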

