Search Results

Search found 10033 results on 402 pages for 'fuzz testing'.

Page 368/402 | < Previous Page | 364 365 366 367 368 369 370 371 372 373 374 375  | Next Page >

  • Closure Tables - Is this enough data to display a tree view?

    - by James Pitt
    Here is the table I have created by testing the closure table method:

        | id  | parentId | childId | hops |
        | 270 |        6 |       6 |    0 |
        | 271 |        7 |       7 |    0 |
        | 272 |        8 |       8 |    0 |
        | 273 |        9 |       9 |    0 |
        | 276 |       10 |      10 |    0 |
        | 281 |        9 |      10 |    1 |
        | 282 |        7 |       9 |    1 |
        | 283 |        7 |      10 |    2 |
        | 285 |        7 |       8 |    1 |
        | 286 |        6 |       7 |    1 |
        | 287 |        6 |       9 |    2 |
        | 288 |        6 |      10 |    3 |
        | 289 |        6 |       8 |    2 |
        | 293 |        6 |       9 |    1 |
        | 294 |        6 |      10 |    2 |

    I am trying to create a simple tree of this using PHP, but there does not seem to be enough data to build it. For example, when I look purely at parentId = 6:

        - Part 6
          - Part 7
            - ?
            - ?
          - Part 9
            - ?
            - ?

    We know that parts 8 and 10 exist below Part 7 or 9, but not which. We know that part 10 exists at both 3 and 4 nodes deep, but where? If I look at other data in the table it is possible to tell that it should be:

        - Part 6
          - Part 7
            - Part 9
              - Part 10
          - Part 9
            - Part 10

    I thought one of the benefits of closure tables was that there was no need for recursive queries? Could you help explain what I am doing wrong?

    EDIT: For clarification, this is a mapping table. There is another table called "parts" which has a column called part_id that correlates to both the parentId and childId columns in the "closure" table. The "id" column in the table above (closure) is just there to maintain a primary key; it is not really necessary. The method I used to create this closure table is described in the following article: http://dirtsimple.org/2010/11/simplest-way-to-do-tree-based-queries.html

    EDIT2: It can have two and three hops. I will explain by assigning names to the items: Part 6 = Bicycle, Part 7 = Gears, Part 8 = Chain, Part 9 = Bolt, Part 10 = Nut. Nut is part of Bolt. The Bolt and Nut combo exists directly within Bicycle and within Gears, which is part of Bicycle. In relation to what method to use, I have looked at Adjacency, Edges, Enum Paths, Closures, DAGs (networks) and the Nested Set Model. I am still trying to work out what is what, but this is an extremely complex component database where there are multiple parents and any modification to a sub-tree must propagate through the other trees. More importantly, there will be insertions, deletions and tree views, and I wish to avoid recursion during general use, even at the cost of database space and query time during entry.
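
    A rough sketch (not part of the original question) of why the hops = 1 rows are enough for display: one non-recursive query returns every direct parent-child pair, and PHP nests them afterwards. The table name, column names and connection details below are assumed.

        <?php
        // Assumed connection details, for illustration only.
        $pdo = new PDO('mysql:host=localhost;dbname=parts', 'user', 'pass');

        // One flat, non-recursive query: only the direct (hops = 1) links are needed.
        $rows = $pdo->query(
            "SELECT parentId, childId FROM closure WHERE hops = 1"
        )->fetchAll(PDO::FETCH_ASSOC);

        // Group child ids under their parent id.
        $children = array();
        foreach ($rows as $row) {
            $children[$row['parentId']][] = $row['childId'];
        }

        // Walk the structure from a known root (Part 6 in the question).
        // Any recursion happens in PHP, not in SQL.
        function printTree(array $children, $id, $depth = 0) {
            echo str_repeat('  ', $depth) . "- Part $id\n";
            if (isset($children[$id])) {
                foreach ($children[$id] as $childId) {
                    printTree($children, $childId, $depth + 1);
                }
            }
        }
        printTree($children, 6);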

    Read the article

  • iphone Odd Problem when using a custom cell

    - by Brodie4598
    Please note where I have the NSLOG. All it is displaying in the log is the first three items in the nameSection. After some testing, I discovered it is displaying how many keys there are because if I add a key to the plist, it will log a fourth item in log. nameSection should be an array of the strings that make up the key array in the plist file. the plist file has 3 dictionaries, each with several arrays of strings. The code picks the dictionary I am working with correctly, then should use the array names as sections in the table and the strings en each array as what to display in each cell. so if the dictionary i am working with has 3 arrays, NSLOG will display 3 strings from the first array: 2010-05-01 17:03:26.957 Checklists[63926:207] string0 2010-05-01 17:03:26.960 Checklists[63926:207] string1 2010-05-01 17:03:26.962 Checklists[63926:207] string2 then stop with: 2010-05-01 17:03:26.963 Checklists[63926:207] * Terminating app due to uncaught exception 'NSRangeException', reason: '* -[NSCFArray objectAtIndex:]: index (3) beyond bounds (3)' if i added an array to the dictionary, it log 4 items instead of 3. I hope this explanation makes sense... -(NSInteger)numberOfSectionsInTableView:(UITableView *)tableView{ return [keys count]; } -(NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger) section { NSString *key = [keys objectAtIndex:section]; NSArray *nameSection = [names objectForKey:key]; return [nameSection count]; } -(UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { NSUInteger section = [indexPath section]; NSString *key = [keys objectAtIndex: section]; NSArray *nameSection = [names objectForKey:key]; static NSString *SectionsTableIdentifier = @"SectionsTableIdentifier"; static NSString *ChecklistCellIdentifier = @"ChecklistCellIdentifier "; ChecklistCell *cell = (ChecklistCell *)[tableView dequeueReusableCellWithIdentifier: SectionsTableIdentifier]; if (cell == nil) { NSArray *nib = [[NSBundle mainBundle] loadNibNamed:@"ChecklistCell" owner:self options:nil]; for (id oneObject in nib) if ([oneObject isKindOfClass:[ChecklistCell class]]) cell = (ChecklistCell *)oneObject; } NSUInteger row = [indexPath row]; NSDictionary *rowData = [self.keys objectAtIndex:row]; NSString *tempString = [[NSString alloc]initWithFormat:@"%@",[nameSection objectAtIndex:row]]; NSLog(@"%@",tempString); cell.colorLabel.text = [tempArray objectAtIndex:0]; cell.nameLabel.text = [tempArray objectAtIndex:1]; return cell; return cell; } - (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath { UITableViewCell *cell = [tableView cellForRowAtIndexPath:indexPath]; if (cell.accessoryType == UITableViewCellAccessoryNone) { cell.accessoryType = UITableViewCellAccessoryCheckmark; } else if (cell.accessoryType == UITableViewCellAccessoryCheckmark) { cell.accessoryType = UITableViewCellAccessoryNone; } [tableView deselectRowAtIndexPath:indexPath animated:NO]; } -(NSString *)tableView:(UITableView *)tableView titleForHeaderInSection:(NSInteger)section{ NSString *key = [keys objectAtIndex:section]; return key; }

    Read the article

  • What Amazon S3 .NET Library is most useful and efficient?

    - by Geo
    There are two main open source .net Amazon S3 libraries. Three Sharp LitS3 I am currently using LitS3 in our MVC demo project, but there is some criticism about it. Has anyone here used both libraries so they can give an objective point of view. Below some sample calls using LitS3: On demo controller: private S3Service s3 = new S3Service() { AccessKeyID = "Thekey", SecretAccessKey = "testing" }; public ActionResult Index() { ViewData["Message"] = "Welcome to ASP.NET MVC!"; return View("Index",s3.GetAllBuckets()); } On demo view: <% foreach (var item in Model) { %> <p> <%= Html.Encode(item.Name) %> </p> <% } %> EDIT 1: Since I have to keep moving and there is no clear indication of what library is more effective and kept more up to date, I have implemented a repository pattern with an interface that will allow me to change library if I need to in the future. Below is a section of the S3Repository that I have created and will let me change libraries in case I need to: using LitS3; namespace S3Helper.Models { public class S3Repository : IS3Repository { private S3Service _repository; #region IS3Repository Members public IQueryable<Bucket> FindAllBuckets() { return _repository.GetAllBuckets().AsQueryable(); } public IQueryable<ListEntry> FindAllObjects(string BucketName) { return _repository.ListAllObjects(BucketName).AsQueryable(); } #endregion If you have any information about this question please let me know in a comment, and I will get back and edit the question. EDIT 2: Since this question is not getting attention, I integrated both libraries in my web app to see the differences in design, I know this is probably a waist of time, but I really want a good long run solution. Below you will see two samples of the same action with the two libraries, maybe this will motivate some of you to let me know your thoughts. WITH THREE SHARP LIBRARY: public IQueryable<T> FindAllBuckets<T>() { List<string> list = new List<string>(); using (BucketListRequest request = new BucketListRequest(null)) using (BucketListResponse response = service.BucketList(request)) { XmlDocument bucketXml = response.StreamResponseToXmlDocument(); XmlNodeList buckets = bucketXml.SelectNodes("//*[local-name()='Name']"); foreach (XmlNode bucket in buckets) { list.Add(bucket.InnerXml); } } return list.Cast<T>().AsQueryable(); } WITH LITS3 LIBRARY: public IQueryable<T> FindAllBuckets<T>() { return _repository.GetAllBuckets() .Cast<T>() .AsQueryable(); }

    Read the article

  • Is this an idiomatic way to pass mocks into objects?

    - by Billy ONeal
    I'm a bit confused about passing this mock class into an implementation class. It feels wrong to have all this explicitly managed memory flying around. I'd just pass the class by value, but that runs into the slicing problem. Am I missing something here? Implementation: namespace detail { struct FileApi { virtual HANDLE CreateFileW( __in LPCWSTR lpFileName, __in DWORD dwDesiredAccess, __in DWORD dwShareMode, __in_opt LPSECURITY_ATTRIBUTES lpSecurityAttributes, __in DWORD dwCreationDisposition, __in DWORD dwFlagsAndAttributes, __in_opt HANDLE hTemplateFile ) { return ::CreateFileW(lpFileName, dwDesiredAccess, dwShareMode, lpSecurityAttributes, dwCreationDisposition, dwFlagsAndAttributes, hTemplateFile); } virtual void CloseHandle(HANDLE handleToClose) { ::CloseHandle(handleToClose); } }; } class File : boost::noncopyable { HANDLE hWin32; boost::scoped_ptr<detail::FileApi> fileApi; public: File( __in LPCWSTR lpFileName, __in DWORD dwDesiredAccess, __in DWORD dwShareMode, __in_opt LPSECURITY_ATTRIBUTES lpSecurityAttributes, __in DWORD dwCreationDisposition, __in DWORD dwFlagsAndAttributes, __in_opt HANDLE hTemplateFile, __in detail::FileApi * method = new detail::FileApi() ) { fileApi.reset(method); hWin32 = fileApi->CreateFileW(lpFileName, dwDesiredAccess, dwShareMode, lpSecurityAttributes, dwCreationDisposition, dwFlagsAndAttributes, hTemplateFile); } ~File() { fileApi->CloseHandle(hWin32); } }; Tests: namespace detail { struct MockFileApi : public FileApi { MOCK_METHOD7(CreateFileW, HANDLE(LPCWSTR, DWORD, DWORD, LPSECURITY_ATTRIBUTES, DWORD, DWORD, HANDLE)); MOCK_METHOD1(CloseHandle, void(HANDLE)); }; } using namespace detail; using namespace testing; TEST(Test_File, OpenPassesArguments) { MockFileApi * api = new MockFileApi; EXPECT_CALL(*api, CreateFileW(Eq(L"BozoFile"), Eq(56), Eq(72), Eq(reinterpret_cast<LPSECURITY_ATTRIBUTES>(67)), Eq(98), Eq(102), Eq(reinterpret_cast<HANDLE>(98)))) .Times(1).WillOnce(Return(reinterpret_cast<HANDLE>(42))); File test(L"BozoFile", 56, 72, reinterpret_cast<LPSECURITY_ATTRIBUTES>(67), 98, 102, reinterpret_cast<HANDLE>(98), api); }
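
    One hedged alternative to the raw new/delete handover (my own sketch, not from the question): let the caller own the API object and hand it to File by reference, so neither production code nor the tests manage heap memory explicitly.

        #include <windows.h>
        #include <boost/noncopyable.hpp>

        namespace detail {
            struct FileApi {
                virtual ~FileApi() {}
                virtual HANDLE CreateFileW(LPCWSTR name, DWORD access, DWORD share,
                                           LPSECURITY_ATTRIBUTES sec, DWORD disposition,
                                           DWORD flags, HANDLE tmpl) {
                    return ::CreateFileW(name, access, share, sec, disposition, flags, tmpl);
                }
                virtual void CloseHandle(HANDLE h) { ::CloseHandle(h); }
            };
            // One shared default instance for production use.
            inline FileApi& DefaultFileApi() { static FileApi api; return api; }
        }

        class File : boost::noncopyable {
            detail::FileApi& api;   // not owned; the caller controls its lifetime
            HANDLE hWin32;
        public:
            File(LPCWSTR name, DWORD access, DWORD share, LPSECURITY_ATTRIBUTES sec,
                 DWORD disposition, DWORD flags, HANDLE tmpl,
                 detail::FileApi& fileApi = detail::DefaultFileApi())
                : api(fileApi),
                  hWin32(fileApi.CreateFileW(name, access, share, sec, disposition, flags, tmpl)) {}
            ~File() { api.CloseHandle(hWin32); }
        };

        // In a test, the mock simply lives on the stack:
        //   MockFileApi api;
        //   EXPECT_CALL(api, CreateFileW( ... ));
        //   File test(L"BozoFile", 56, 72, sec, 98, 102, tmpl, api);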

    Read the article

  • Decrypting data from a secure socket

    - by Ronald
    I'm working on a server application in Java. I've successfully got past the handshake portion of the communication process, but how do I go about decrypting my input stream? Here is how I set up my server: import java.io.IOException; import java.net.ServerSocket; import java.net.Socket; import java.util.ArrayList; import java.util.HashMap; import javax.net.ServerSocketFactory; import javax.net.ssl.SSLServerSocket; import javax.net.ssl.SSLServerSocketFactory; import org.json.me.JSONException; import dictionary.Dictionary; public class Server { private static int port = 1234; public static void main(String[] args) throws JSONException { System.setProperty("javax.net.ssl.keyStore", "src/my.keystore"); System.setProperty("javax.net.ssl.keyStorePassword", "test123"); System.out.println("Starting server on port: " + port); HashMap<String, Game> games = new HashMap<String, Game>(); final String[] enabledCipherSuites = { "SSL_RSA_WITH_RC4_128_SHA" }; try{ SSLServerSocketFactory socketFactory = (SSLServerSocketFactory) SSLServerSocketFactory.getDefault(); SSLServerSocket listener = (SSLServerSocket) socketFactory.createServerSocket(port); listener.setEnabledCipherSuites(enabledCipherSuites); Socket server; Dictionary dict = new Dictionary(); Game game = new Game(dict); //for testing, creates 1 global game. while(true){ server = listener.accept(); ClientConnection conn = new ClientConnection(server, game, "User"); Thread t = new Thread(conn); t.start(); } } catch(IOException e){ System.out.println("Failed setting up on port: " + port); e.printStackTrace(); } } } I used a BufferedReader to get the data send from the client: BufferedReader d = new BufferedReader(new InputStreamReader(socket.getInputStream())); After the handshake is complete it appears like I'm getting encrypted data. I did some research online and it seems like I might need to use a Cipher, but I'm not sure. Any ideas?
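
    A general note, hedged because it goes beyond the post: with JSSE the handshake and the record-layer encryption happen inside the SSLSocket itself, so the stream returned by getInputStream() already yields plaintext and no explicit Cipher step should be needed. A minimal read loop might look like this (the charset is an assumption):

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.io.InputStreamReader;
        import java.net.Socket;

        class ClientReader {
            // A socket accepted from an SSLServerSocket decrypts transparently.
            static void readLines(Socket socket) throws IOException {
                BufferedReader in = new BufferedReader(
                        new InputStreamReader(socket.getInputStream(), "UTF-8"));
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println("received: " + line);
                }
            }
        }

    If the data still looks like gibberish, the usual suspects are a client that never completes the TLS handshake or a character-set mismatch, rather than anything that needs decrypting by hand.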

    Read the article

  • javascript simple object creation test: opera leaks?

    - by joe
    Hi, I am trying to figure out certain memory leak conditions in javascript on a few browsers. Currently I'm only testing FF 3.6, Opera 10.10, and Safari 4.0.3. I've started with a fairly simple test, and can confirm no memory leaks in Firefox and Safari. But Opera just takes memory and never gives it back. What gives? Here's the test: <html> <head> <script type="text/javascript"> window.onload = init; //window.onunload = cleanup; var a=[]; function init() { var d = document.createElement('div'); d.innerHTML = "page loading..."; document.body.appendChild(d); for (var i=0; i<400000; i++) { a[i] = new Obj("xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"); } d.innerHTML = "PAGE LOADED"; } function cleanup() { for (var i=0; i<400000; i++) { a[i] = null; } } function Obj(msg) { this.msg=msg; } </script> </head> <body> </body> </html> I shouldn't need the cleanup() call on window.unload, but tried that also. No luck. As you can see this is simple JS, no circular DOM links, no closures. I monitor the memory usage using 'top' on Mac 10.4.11. Memory usage spikes up on page load, as expected. In FF and Safari reloading the page does not use any further memory, and all memory is returned when the window (tab) is closed. In Opera, memory spikes on load, and seems to also spike further on each reload (but not always...). But regardless of reload, memory never goes back down below the initial load spike. I had hoped this was a no-brainer test that all browsers would pass, so I could move on to more "interesting" conditions. Am I doing something wrong here? Or is this a known Opera issue? Thanks! -joe

    Read the article

  • Flash Buttons Don't Work: TypeError: Error #1009: Cannot access a property or method of a null object

    - by goldenfeelings
    I've read through several threads about this error, but haven't been able to apply it to figure out my situation... My flash file is an approx 5 second animation. Then, the last keyframe of each layer (frame #133) has a button in it. My flash file should stop on this last key frame, and you should be able to click on any of the 6 buttons to navigate to another html page in my website. Here is the Action Script that I have applied to the frame in which the buttons exist (on a separate layer, see screenshot at: http://www.footprintsfamilyphoto.com/wp-content/themes/Footprints/images/flash_buttonissue.jpg stop (); function babieschildren(event:MouseEvent):void { trace("babies children method was called!!!"); var targetURL:URLRequest = new URLRequest("http://www.footprintsfamilyphoto.com/portfolio/babies-children"); navigateToURL(targetURL, "_self"); } bc_btn1.addEventListener(MouseEvent.CLICK, babieschildren); bc_btn2.addEventListener(MouseEvent.CLICK, babieschildren); function fams(event:MouseEvent):void { trace("families method was called!!!"); var targetURL:URLRequest = new URLRequest("http://www.footprintsfamilyphoto.com/portfolio/families"); navigateToURL(targetURL, "_self"); } f_btn1.addEventListener(MouseEvent.CLICK, fams); f_btn2.addEventListener(MouseEvent.CLICK, fams); function couplesweddings(event:MouseEvent):void { trace("couples weddings method was called!!!"); var targetURL:URLRequest = new URLRequest("http://www.footprintsfamilyphoto.com/portfolio/couples-weddings"); navigateToURL(targetURL, "_self"); } cw_btn1.addEventListener(MouseEvent.CLICK, couplesweddings); cw_btn2.addEventListener(MouseEvent.CLICK, couplesweddings); When I test the movie, I get this error in the output box: "TypeError: Error #1009: Cannot access a property or method of a null object reference." The test movie does stop on the appropriate frame, but the buttons don't do anything (no URL is opened, and the trace statements don't show up in the output box when the buttons are clicked on the test movie). You can view the .swf file here: www.footprintsfamilyphoto.com/portfolio I'm confident that all 6 buttons do exist in the appropriate frame (frame 133), so I don't think that's what's causing the 1009 error. I also tried deleting each of the three function/addEventListener sections one at a time and testing, and I still got the 1009 error every time. If I delete ALL of the action script except for the "stop ()" line, then I do NOT get the 1009 error. Any ideas?? I'm very new to Flash, so if I haven't clarified something that I need to, let me know!

    Read the article

  • HttpPost request unsuccessful

    - by The Thom
    I have written a web service and am now writing a tester to perform integration testing from the outside. I am writing my tester using apache httpclient 4.3. Based on the code here: http://hc.apache.org/httpcomponents-client-4.3.x/quickstart.html and here: http://hc.apache.org/httpcomponents-client-4.3.x/tutorial/html/fundamentals.html#d5e186 I have written the following code. Map<String, String> parms = new HashMap<>(); parms.put(AirController.KEY_VALUE, json); postUrl(SERVLET, parms); ... protected String postUrl(String servletName, Map<String, String> parms) throws AirException{ String url = rootUrl + servletName; HttpPost post = new HttpPost(url); List<NameValuePair> nvps = new ArrayList<NameValuePair>(); for(Map.Entry<String, String> entry:parms.entrySet()){ BasicNameValuePair parm = new BasicNameValuePair(entry.getKey(), entry.getValue()); nvps.add(parm); } try { post.setEntity(new UrlEncodedFormEntity(nvps)); } catch (UnsupportedEncodingException use) { String msg = "Invalid parameters:" + parms; throw new AirException(msg, use); } CloseableHttpResponse response; try { response = httpclient.execute(post); } catch (IOException ioe) { throw new AirException(ioe); } String result; if(HttpStatus.SC_OK == response.getStatusLine().getStatusCode()){ result = processResponse(response); } else{ String msg = MessageFormat.format("Invalid status code {0} received from query {1}.", new Object[]{response.getStatusLine().getStatusCode(), url}); throw new AirException(msg); } return result; } This code successfully reaches my servlet. In my servlet, I have (using Spring's AbstractController): protected ModelAndView post(HttpServletRequest request, HttpServletResponse response) { String json = String.valueOf(request.getParameter(KEY_VALUE)); if(json.equals("null")){ log.info("Received null value."); response.setStatus(406); return null; } And this code always falls into the null parameter code and returns a 406. I'm sure I'm missing something simple, but can't see what it is.
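
    A small debugging aid, offered as a guess rather than a diagnosis: printing the encoded form entity on the client side shows exactly which parameter name and value go over the wire, which can then be compared with what the servlet reads via getParameter(). The parameter name below is a stand-in for AirController.KEY_VALUE, whose real value isn't shown in the post.

        import java.util.ArrayList;
        import java.util.List;
        import org.apache.http.NameValuePair;
        import org.apache.http.client.entity.UrlEncodedFormEntity;
        import org.apache.http.message.BasicNameValuePair;
        import org.apache.http.util.EntityUtils;

        class FormBodyDebug {
            public static void main(String[] args) throws Exception {
                List<NameValuePair> nvps = new ArrayList<NameValuePair>();
                nvps.add(new BasicNameValuePair("value", "{\"flight\":\"123\"}")); // assumed name and JSON
                UrlEncodedFormEntity entity = new UrlEncodedFormEntity(nvps);
                // Prints, for example: value=%7B%22flight%22%3A%22123%22%7D
                System.out.println(EntityUtils.toString(entity));
            }
        }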

    Read the article

  • Getting <div>s to align next to each other

    - by user1322845
    I have the following code I am trying to get working correctly: <div id="newspost_bg"> <article> <p> <header><h3>The fast red fox!</h3></header> This is where the article resides in the article tag. This is good for SEO optimization. <footer>Read More..</footer> </p> </article> </div> <div id="newspost_bg"> hello </div> <div id="newspost_bg"> hello </div> <div id="advertisement"> <script type="text/javascript"><!-- google_ad_client = "ca-pub-2139701283631933"; /* testing site */ google_ad_slot = "4831288817"; google_ad_width = 120; google_ad_height = 600; //--> </script> </div> Here is the CSS that goes with it: #newspost_bg{ position: relative; background-color: #d9dde1; width:700px; height:250px; margin: 10px; margin-left: 20px; border: solid 10px #1d2631; float:left; } #newspost_bg article{ position: relative; margin-left: 20px; } #advertisement{ float: left; background-color: #d9dde1; width: 125px; height: 605px; margin: 10px; } The problem I'm experiencing is that the advertisement I'm trying to set up aligns with the last div with the id of newspost_bg, but I'm looking to have it align to the top of the container it is in. I don't know if this is enough info; if not, please let me know what you might need. I'm new to the web coding scene, so any and all critiques help me.
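
    A hedged suggestion (the wrapper id and class name here are invented): id values must be unique, so the repeated newspost_bg would normally become a class, and grouping the posts into their own floated column lets the advertisement float beside them from the top rather than next to the last post.

        <style>
          #posts         { float: left; width: 740px; }
          .newspost_bg   { background-color: #d9dde1; width: 700px; height: 250px;
                           margin: 10px 10px 10px 20px; border: solid 10px #1d2631; }
          #advertisement { float: left; background-color: #d9dde1;
                           width: 125px; height: 605px; margin: 10px; }
        </style>

        <div id="posts">
          <div class="newspost_bg">first post</div>
          <div class="newspost_bg">second post</div>
          <div class="newspost_bg">third post</div>
        </div>
        <div id="advertisement">
          <!-- ad script goes here -->
        </div>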

    Read the article

  • How to successfully implement og:image for LinkedIn

    - by Sabo
    THE PROBLEM: I am trying, without much success, to implement open graph image on site: http://www.guarenty-group.com/cz/ The homepage is completeply bypassing the og:image tag, where internal pages are reading all images from the site and place og:image as the last option. Other social networks are working fine on both internal pages and homepage. THE CONFIGURATION: I have no share buttons or alike, all I want is to be able to share the link via my profile. The image is well over 300x300px: http://guarenty-group.com/img/gg_seal.png Here is how my head tag looks like: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <title>Guarenty Group : Pojištení pro nájemce a pronajímatelé</title> <meta name="keywords" content="" /> <meta name="description" content="Guarenty Group pojištuje príjem z nájmu pronajímatelum, kauci nájemcum - aby nemuseli platit velkou cástku v hotovostí predem - a dále nájemcum pojištuje príjmy, aby meli na nájem pri nemoci, úrazu ci nezamestnání." /> <meta name="image_src" content="http://guarenty-group.com/img/gg_seal.png" /> <meta name="image_url" content="http://guarenty-group.com/img/gg_seal.png" /> <meta property="og:title" content="Pojištení pro nájemce a pronajímatelé" /> <meta property="og:url" content="http://guarenty-group.com/cz/" /> <meta property="og:image" content="http://guarenty-group.com/img/gg_seal.png" /> <meta property="og:description" content="Guarenty Group pojištuje príjem z nájmu pronajímatelum, kauci nájemcum - aby nemuseli platit velkou cástku v hotovostí predem - a dále nájemcum pojištuje príjmy, aby meli na nájem pri nemoci, úrazu ci nezamestnání [...]" /> ... </head> THE TESTING RESULTS: In order to trick the cache i have tested the site with http://www.guarenty-group.com/cz/?try=N, where I have changed the N every time. The strange thing is that images found for different value of N is different. Sometimes there is no image, sometimes there is 1, 2 or 3 images, but each time there is a different set of images. But, in any case I could not find the image specified in the og:graph! MY QUESTIONS: https://developer.linkedin.com/documents/setting-display-tags-shares is saying one thing, and the personnel on the support forum is saying "over 300" Does anyone know What is the official minimum dimension of the image (both w and h)? Can an image be too large? Should I use the xmlns, should I not use xmlns or it doesn't matter? What are the maximum (and minimum) lengths for og:title and og:description tags? Any other suggestion is of course welcomed :) Thanks in advance, cheers~

    Read the article

  • How to enable a UAC prompt programmatically

    - by peter
    I want to implement a UAC prompt for an application in Visual C++ by means of a manifest. The operating system is 32-bit, x7460 (2 processors), Windows Server 2008, and the exe is myproject.exe. For testing I will build the application on Windows XP, then copy the exe onto the Windows Server 2008 machine and replace it there. So what I did is add a manifest named myproject.exe.manifest. My project has 3 folders: Header Files, Resource Files and Source Files. I added this manifest (myproject.exe.manifest) to the Source Files folder containing the other cpp and c code: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> <assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0"> <assemblyIdentity version="11.1.4.0" processorArchitecture="X7460" name="myproject" type="win32"/> <description>myproject Problem</description> <!-- Identify the application security requirements. --> <trustInfo xmlns="urn:schemas-microsoft-com:asm.v2"> <security> <requestedPrivileges> <requestedExecutionLevel level="requireAdministrator" uiAccess="false"/> </requestedPrivileges> </security> </trustInfo> </assembly> Then I added this line of code for the resource file (.rc). That is, there is a header file (Myproject.h), and I added the line of code there: #define MANIFEST_RESOURCE_ID 1 MANIFEST_RESOURCE_ID RT_MANIFEST "myproject.exe.manifest" Finally I did the following steps: 1. Open your project in Microsoft Visual Studio 2005. 2. Under Project, select Properties. 3. In Properties, select Manifest Tool, and then select Input and Output. 4. Add in the name of your application manifest file under Additional manifest files. 5. Rebuild your application. But I am getting a lot of syntax errors. Is there any problem in the way I followed? If I comment out the lines #define MANIFEST_RESOURCE_ID 1 and MANIFEST_RESOURCE_ID RT_MANIFEST "myproject.exe.manifest", which I added in Myproject.h for adding values to the .rc file, there will not be any error other than this general error: c1010070: Failed to load and parse the manifest. The system cannot find the file specified. .\myproject.exe.manifest

    Read the article

  • QTreeView memory consumption

    - by Eye of Hell
    Hello. I'm testing QTreeView functionality right now, and I was amazed by one thing. It seems that QTreeView memory consumption depends on the item count O_O. This is highly unusual, since model-view containers of this type only keep track of the items being displayed, and the rest of the items stay in the model. I have written the following code with a simple model that holds no data and just reports that it has 10 million items. With MFC, the Windows API or .NET, a tree / list with such a model will take no memory, since it will display only the 10-20 visible elements and will request more from the model upon scrolling / expanding items. But with Qt, this simple model results in ~300 MB of memory consumption, and increasing the number of items increases it further. Can anyone hint at what I'm doing wrong? :) #include <QtGui/QApplication> #include <QTreeView> #include <QAbstractItemModel> class CModel : public QAbstractItemModel { public: QModelIndex index ( int i_nRow, int i_nCol, const QModelIndex& i_oParent = QModelIndex() ) const { return createIndex( i_nRow, i_nCol, 0 ); } public: QModelIndex parent ( const QModelIndex& i_oInex ) const { return QModelIndex(); } public: int rowCount ( const QModelIndex& i_oParent = QModelIndex() ) const { return i_oParent.isValid() ? 0 : 1000 * 1000 * 10; } public: int columnCount ( const QModelIndex& i_oParent = QModelIndex() ) const { return 1; } public: QVariant data ( const QModelIndex& i_oIndex, int i_nRole = Qt::DisplayRole ) const { return Qt::DisplayRole == i_nRole ? QVariant( "1" ) : QVariant(); } }; int main(int argc, char *argv[]) { QApplication a(argc, argv); QTreeView oWnd; CModel oModel; oWnd.setUniformRowHeights( true ); oWnd.setModel( & oModel ); oWnd.show(); return a.exec(); }

    Read the article

  • Best way to test for a variable's existence in PHP; isset() is clearly broken

    - by chazomaticus
    From the isset() docs: isset() will return FALSE if testing a variable that has been set to NULL. Basically, isset() doesn't check for whether the variable is set at all, but whether it's set to anything but NULL. Given that, what's the best way to actually check for the existence of a variable? I tried something like: if(isset($v) || @is_null($v)) (the @ is necessary to avoid the warning when $v is not set) but is_null() has a similar problem to isset(): it returns TRUE on unset variables! It also appears that: @($v === NULL) works exactly like @is_null($v), so that's out, too. How are we supposed to reliably check for the existence of a variable in PHP? Edit: there is clearly a difference in PHP between variables that are not set, and variables that are set to NULL: <?php $a = array('b' => NULL); var_dump($a); PHP shows that $a['b'] exists, and has a NULL value. If you add: var_dump(isset($a['b'])); var_dump(isset($a['c'])); you can see the ambiguity I'm talking about with the isset() function. Here's the output of all three of these var_dump()s: array(1) { ["b"]=> NULL } bool(false) bool(false) Further edit: two things. One, a use case. An array being turned into the data of an SQL UPDATE statement, where the array's keys are the table's columns, and the array's values are the values to be applied to each column. Any of the table's columns can hold a NULL value, signified by passing a NULL value in the array. You need a way to differentiate between an array key not existing, and an array's value being set to NULL; that's the difference between not updating the column's value and updating the column's value to NULL. Second, Zoredache's answer, array_key_exists() works correctly, for my above use case and for any global variables: <?php $a = NULL; var_dump(array_key_exists('a', $GLOBALS)); var_dump(array_key_exists('b', $GLOBALS)); outputs: bool(true) bool(false) Since that properly handles just about everywhere I can see there being any ambiguity between variables that don't exist and variables that are set to NULL, I'm calling array_key_exists() the official easiest way in PHP to truly check for the existence of a variable. (Only other case I can think of is for class properties, for which there's property_exists(), which, according to its docs, works similarly to array_key_exists() in that it properly distinguishes between not being set and being set to NULL.)
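
    A small convenience wrapper, offered as a sketch (the function name is invented): it just packages the array_key_exists() check above, and as noted it only answers the question for global scope.

        <?php
        // True when a *global* variable exists, even if its value is NULL.
        function global_var_exists($name) {
            return array_key_exists($name, $GLOBALS);
        }

        $a = NULL;
        var_dump(global_var_exists('a')); // bool(true)
        var_dump(global_var_exists('b')); // bool(false)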

    Read the article

  • WLI domain with 3 servers - issues on JPD process startup

    - by XpiritO
    Hi there. I'm currently working on a clustered WLI environment which comprehends 3 servers: 1 admin server ("AdminServer") and 2 managed servers ("mn1" and "mn2") grouped as a cluster, as follows: Architecture diagram: http://img72.imageshack.us/img72/4112/clusterdiagram.jpg I've developed a JPD process to execute some scheduled tasks, invoked using a Message Broker. I've deployed this project into a single-server WLI domain (with AdminServer only) and it works as expected: the JPD process is invoked (I've configured a Timer Event Generator instance to start it up). Message broker: http://img532.imageshack.us/img532/1443/wlimessagebroker.jpg Timer event generator: http://img408.imageshack.us/img408/7358/wlitimereventgenerator.jpg In order to achieve fail-over and load-balancing capabilities, I'm currently trying to deploy this JPD process into this clustered WLI environment. Although, I'm having some issues with this, as I cannot get it to work properly, even if it still works. Here is a screenshot of the "WLI Process Instance Monitor" (with AdminServer and mn1 instances up and running): http://img710.imageshack.us/img710/8477/wliprocessinstancemonit.jpg According to this screen the process seems to be running, as it shows in this instance monitor screen. However, I don't see any output coming out neither at AdminServer console or mn1 console. In single-server domain it was visible output from JPD process "timeout" callback method, wich implementation is shown below: @com.bea.wli.control.broker.MessageBroker.StaticSubscription(xquery = "", filterValueMatch = "", channelName = "/SamplePrefix/Samples/SampleStringChannel", messageBody = "{x0}") public void subscription(java.lang.String x0) { String toReturn=""; try { Context myCtx = new InitialContext(); MBeanHome mbeanHome = (MBeanHome)myCtx.lookup("weblogic.management.home.localhome"); toReturn=mbeanHome.getMBeanServer().getServerName(); System.out.println("**** executed at **** " + System.currentTimeMillis() + " by: " + toReturn); } catch (Exception e) { System.out.println("Exception!"); e.printStackTrace(); } } (...) @org.apache.beehive.controls.api.events.EventHandler(field = "myT", eventSet = com.bea.control.WliTimerControl.Callback.class, eventName = "onTimeout") public void myT_onTimeout(long time, java.io.Serializable data) { // #START: CODE GENERATED - PROTECTED SECTION - you can safely add code above this comment in this method. #// // input transform System.out.println("**** published at **** " + System.currentTimeMillis()); publishControl.publish("aaaa"); // parameter assignment // #END : CODE GENERATED - PROTECTED SECTION - you can safely add code below this comment in this method. #// } and here is the output visible at "AdminServer" console in single-server domain testing: **** published at **** 1273238090713 **** executed at **** 1273238132123 by: AdminServer **** published at **** 1273238152462 **** executed at **** 1273238152562 by: AdminServer (...) What may be wrong with my clustered configuration? Am I missing something to accomplish clustered deployment? Thanks in advance for your help.

    Read the article

  • Finding N contiguous zero bits in an integer to the left of the MSB position of another integer

    - by James Morris
    The problem is: given an integer val1 find the position of the highest bit set (Most Significant Bit) then, given a second integer val2 find a contiguous region of unset bits, with the minimum number of zero bits given by width to the left of the position (ie, in the higher bits). Here is the C code for my solution: typedef unsigned int t; unsigned const t_bits = sizeof(t) * CHAR_BIT; _Bool test_fit_within_left_of_msb( unsigned width, t val1, t val2, unsigned* offset_result) { unsigned offbit = 0; unsigned msb = 0; t mask; t b; while(val1 >>= 1) ++msb; while(offbit + width < t_bits - msb) { mask = (((t)1 << width) - 1) << (t_bits - width - offbit); b = val2 & mask; if (!b) { *offset_result = offbit; return true; } if (offbit++) /* this conditional bothers me! */ b <<= offbit - 1; while(b <<= 1) offbit++; } return false; } Aside from faster ways of finding the MSB of the first integer, the commented test for a zero offbit seems a bit extraneous, but necessary to skip the highest bit of type t if it is set. I have also implemented similar algorithms but working to the right of the MSB of the first number, so they don't require this seemingly extra condition. How can I get rid of this extra condition, or even, are there far more optimal solutions? Edit: Some background not strictly required. The offset result is a count of bits from the high bit, not from the low bit as maybe expected. This will be part of a wider algorithm which scans a 2D array for a 2D area of zero bits. Here, for testing, the algorithm has been simplified. val1 represents the first integer which does not have all bits set found in a row of the 2D array. From this the 2D version would scan down which is what val2 represents. Here's some output showing success and failure: t_bits:32 t_high: 10000000000000000000000000000000 ( 2147483648 ) --------- ----------------------------------- *** fit within left of msb test *** ----------------------------------- val1: 00000000000000000000000010000000 ( 128 ) val2: 01000001000100000000100100001001 ( 1091569929 ) msb: 7 offbit:0 + width: 8 = 8 mask: 11111111000000000000000000000000 ( 4278190080 ) b: 01000001000000000000000000000000 ( 1090519040 ) offbit:8 + width: 8 = 16 mask: 00000000111111110000000000000000 ( 16711680 ) b: 00000000000100000000000000000000 ( 1048576 ) offbit:12 + width: 8 = 20 mask: 00000000000011111111000000000000 ( 1044480 ) b: 00000000000000000000000000000000 ( 0 ) offbit:12 iters:10 ***** found room for width:8 at offset: 12 ***** ----------------------------------- *** fit within left of msb test *** ----------------------------------- val1: 00000000000000000000000001000000 ( 64 ) val2: 00010000000000001000010001000001 ( 268469313 ) msb: 6 offbit:0 + width: 13 = 13 mask: 11111111111110000000000000000000 ( 4294443008 ) b: 00010000000000000000000000000000 ( 268435456 ) offbit:4 + width: 13 = 17 mask: 00001111111111111000000000000000 ( 268402688 ) b: 00000000000000001000000000000000 ( 32768 ) ***** mask: 00001111111111111000000000000000 ( 268402688 ) offbit:17 iters:15 ***** no room found for width:13 ***** (iters is the count of iterations of the inner while loop)
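
    On the side note about finding the MSB faster: where GCC or Clang is available, __builtin_clz gives the bit index in one step. This is an optional substitution, not part of the original algorithm, and it assumes the argument is non-zero (clz of zero is undefined).

        #include <limits.h>

        typedef unsigned int t;
        static const unsigned t_bits = sizeof(t) * CHAR_BIT;

        /* Index of the highest set bit of v; v must be non-zero. */
        static unsigned msb_index(t v)
        {
        #if defined(__GNUC__) || defined(__clang__)
            return t_bits - 1u - (unsigned) __builtin_clz(v);
        #else
            unsigned msb = 0;
            while (v >>= 1)
                ++msb;
            return msb;
        #endif
        }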

    Read the article

  • c++ class member functions selected by traits

    - by Jive Dadson
    I am reluctant to say I can't figure this out, but I can't figure this out. I've googled and searched stackoverflow, and come up empty. The abstract, and possibly overly vague form of the question is, how can I use the traits-pattern to instantiate non-virtual member functions? The question came up while modernizing a set of multivariate function optimizers that I wrote more than 10 years ago. The optimizers all operate by selecting a straight-line path through the parameter space away from the current best point (the "update"), then finding a better point on that line (the "line search"), then testing for the "done" condition, and if not done, iterating. There are different methods for doing the update, the line-search, and conceivably for the done test, and other things. Mix and match. Different update formulae require different state-variable data. For example, the LMQN update requires a vector, and the BFGS update requires a matrix. If evaluating gradients is cheap, the line-search should do so. If not, it should use function evaluations only. Some methods require more accurate line-searches than others. Those are just some examples. The original version instatiates several of the combinations by means of virtual functions. Some traits are selected by setting mode bits. Yuck. It would be trivial to define the traits with #define's and the member functions with #ifdef's and macros. But that's so twenty years ago. It bugs me that I cannot figure out a whiz-bang modern way. If there were only one trait that varied, I could use the curiously recurring template pattern. But I see no way to extend that to arbitrary combinations of traits. I tried doing it using boost::enable_if, etc.. The specialized state info was easy. I managed to get the functions done, but only by resorting to non-friend external functions that have the this-pointer as a parameter. I never even figured out how to make the functions friends, much less member functions. Perhaps tag-dispatch is the key. I haven't gotten very deeply into that. Surely it's possible, right? If so, what is best practice?
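
    One possible shape for this, offered as a sketch rather than an answer (all names below are invented): policy-based class design composes the independently varying pieces as template parameters, each carrying its own state, so every call resolves at compile time with no virtual dispatch.

        #include <vector>

        struct LmqnUpdate {
            std::vector<double> state;                        // whatever this update formula needs
            void update(std::vector<double>& x) { /* ... */ }
        };

        struct CheapGradientLineSearch {
            double search(std::vector<double>& x) { /* ... */ return 0.0; }
        };

        struct SimpleDoneTest {
            bool done(double improvement) const { return improvement < 1e-9; }
        };

        template <class Update, class LineSearch, class DoneTest>
        class Optimizer : private Update, private LineSearch, private DoneTest {
        public:
            void run(std::vector<double>& x) {
                double improvement = 1.0;
                while (!this->done(improvement)) {
                    this->update(x);              // non-virtual, chosen at compile time
                    improvement = this->search(x);
                }
            }
        };

        // Mix and match without touching Optimizer itself:
        // Optimizer<LmqnUpdate, CheapGradientLineSearch, SimpleDoneTest> opt;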

    Read the article

  • Performance of SHA-1 Checksum from Android 2.2 to 2.3 and Higher

    - by sbrichards
    In testing the performance of: package com.srichards.sha; import android.app.Activity; import android.os.Bundle; import android.widget.TextView; import java.io.IOException; import java.io.InputStream; import java.security.MessageDigest; import java.security.NoSuchAlgorithmException; import java.util.zip.ZipEntry; import java.util.zip.ZipFile; import com.srichards.sha.R; public class SHAHashActivity extends Activity { /** Called when the activity is first created. */ @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); TextView tv = new TextView(this); String shaVal = this.getString(R.string.sha); long systimeBefore = System.currentTimeMillis(); String result = shaCheck(shaVal); long systimeResult = System.currentTimeMillis() - systimeBefore; tv.setText("\nRunTime: " + systimeResult + "\nHas been modified? | Hash Value: " + result); setContentView(tv); } public String shaCheck(String shaVal){ try{ String resultant = "null"; MessageDigest digest = MessageDigest.getInstance("SHA1"); ZipFile zf = null; try { zf = new ZipFile("/data/app/com.blah.android-1.apk"); // /data/app/com.blah.android-2.apk } catch (IOException e) { // TODO Auto-generated catch block e.printStackTrace(); } ZipEntry ze = zf.getEntry("classes.dex"); InputStream file = zf.getInputStream(ze); byte[] dataBytes = new byte[32768]; //65536 32768 int nread = 0; while ((nread = file.read(dataBytes)) != -1) { digest.update(dataBytes, 0, nread); } byte [] rbytes = digest.digest(); StringBuffer sb = new StringBuffer(""); for (int i = 0; i< rbytes.length; i++) { sb.append(Integer.toString((rbytes[i] & 0xff) + 0x100, 16).substring(1)); } if (shaVal.equals(sb.toString())) { resultant = ("\nFalse : " + "\nFound:\n" + sb.toString() + "|" + "\nHave:\n" + shaVal); } else { resultant = ("\nTrue : " + "\nFound:\n" + sb.toString() + "|" + "\nHave:\n" + shaVal); } return resultant; } catch (IOException e) { e.printStackTrace(); } catch (NoSuchAlgorithmException e) { e.printStackTrace(); } return null; } } On a 2.2 Device I get average runtime of ~350ms, while on newer devices I get runtimes of 26-50ms which is substantially lower. I'm keeping in mind these devices are newer and have better hardware but am also wondering if the platform and the implementation affect performance much and if there is anything that could reduce runtimes on 2.2 devices. Note, the classes.dex of the .apk being accessed is roughly 4MB. Thanks!

    Read the article

  • Usage of Document() function in XSLT 1.0

    - by infant programmer
    I am triggering the transformation from .NET code. Unless I add the "EnableDocumentFunction" property to the XSLT settings, the program throws an error saying "Usage of Document() function is prohibited". The program itself is not editable (effectively read-only), so is it possible to edit the XSL code itself so that I can still use the document() function? The sample XSL and XML are below. Sample XSL: <?xml version="1.0" encoding="utf-8"?> <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:msxsl="urn:schemas-microsoft-com:xslt" exclude-result-prefixes="msxsl" > <xsl:output method="xml" indent="yes"/> <xsl:template match="@* | node()"> <xsl:copy> <xsl:apply-templates select="@* | node()"/> </xsl:copy> </xsl:template> <xsl:variable name="State_Code_Trn"> <State In="California" Out="CA"/> <State In="CA" Out="CA"/> <State In="Texas" Out="TX"/> <State In="TX" Out="TX"/> </xsl:variable> <xsl:template name="testing" match="test_node"> <xsl:variable name="test_val"> <xsl:value-of select="."/> </xsl:variable> <xsl:element name="{name()}"> <xsl:choose> <xsl:when test="document('')/*/xsl:variable[@name='State_Code_Trn'] /State[@In=$test_val]"> <xsl:value-of select="document('')/*/xsl:variable[@name='State_Code_Trn'] /State[@In=$test_val]/@Out"/> </xsl:when> <xsl:otherwise> <xsl:text>Other</xsl:text> </xsl:otherwise> </xsl:choose> </xsl:element> </xsl:template> </xsl:stylesheet> And the sample XML: <?xml version="1.0" encoding="utf-8"?> <root> <test_node>California</test_node> <test_node>CA</test_node> <test_node> CA</test_node> <test_node>Texas</test_node> <test_node>TX</test_node> <test_node>CAA</test_node> <test_node></test_node> </root>
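
    Since the calling program cannot be changed, one hedged possibility is to avoid document('') altogether: the stylesheet already declares the msxsl namespace, and on the Microsoft processor msxsl:node-set() lets the lookup read the variable's content directly. A sketch of just the changed template:

        <xsl:template name="testing" match="test_node">
          <xsl:variable name="test_val" select="."/>
          <xsl:element name="{name()}">
            <xsl:choose>
              <xsl:when test="msxsl:node-set($State_Code_Trn)/State[@In=$test_val]">
                <xsl:value-of select="msxsl:node-set($State_Code_Trn)/State[@In=$test_val]/@Out"/>
              </xsl:when>
              <xsl:otherwise>
                <xsl:text>Other</xsl:text>
              </xsl:otherwise>
            </xsl:choose>
          </xsl:element>
        </xsl:template>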

    Read the article

  • How to clean and simplify this code?

    - by alkalim
    After thinking about This Question and giving an answer to it, I wanted to do more of that to train myself. So I wrote a function which calculates the length of a given function. The given PHP file has to start at the beginning of the needed function. Example: if the function is in a big PHP file with lots of functions, like /* lots of functions */ function f_interesting($arg) { /* function */ } /* lots of other functions */ then $part3 of my function needs to begin like this (after the starting { of the needed function): /* function */ } /* lots of other functions */ Now that's not the problem, but I would like to know if there is a cleaner or simpler way to do this. Here's my function (I already cleaned out a lot of testing echo commands): function f_analysis ($part3) { if(isset($part3)) { $char_array = str_split($part3); //get array of chars $end_key = false; //length of function $depth = 0; //how many unclosed '{' $in_sstr = false; //is the next char inside a ''-string? $in_dstr = false; //is the next char inside a ""-string? $in_sl_comment = false; //inside an //-comment? $in_ml_comment = false; //inside an /* */-comment? $may_comment = false; //was the last char an '/' which can start a comment? $may_ml_comment_end = false; //was the last char an '*' which may end a /**/-comment? foreach($char_array as $key=>$char) { if($in_sstr) { if ($char == "'") { $in_sstr = false; } } else if($in_dstr) { if($char == '"') { $in_dstr = false; } } else if($in_sl_comment) { if($char == "\n") { $in_sl_comment = false; } } else if($in_ml_comment) { if($may_ml_comment_end) { $may_ml_comment_end = false; if($char == '/') { $in_ml_comment = false; } } if($char == '*') { $may_ml_comment_end = true; } } else if ($may_comment) { if($char == '/') { $in_sl_comment = true; } else if($char == '*') { $in_ml_comment = true; } $may_comment = false; } else { switch ($char) { case '{': $depth++; break; case '}': $depth--; break; case '/': $may_comment = true; break; case '"': $in_dstr = true; break; case "'": $in_sstr = true; break; } } if($depth < 0) { $last_key = $key; break; } } } else echo '<br>$part3 of f_analysis not set!'; return ($last_key===false) ? false : $last_key+1; //will be false or the length of the function }

    Read the article

  • WPF Binding when setting DataTemplate Programmatically

    - by Daniel
    Hello, I have my little designer tool (my program). On the left side I have a TreeView and on the right side I have an Accordion. When I select a node I want to dynamically build accordion items based on properties from the DataContext of the selected node. Selecting nodes works fine, and when I use this sample code for testing it also works. XAML code: <layoutToolkit:Accordion x:Name="accPanel" SelectionMode="ZeroOrMore" SelectionSequence="Simultaneous"> <layoutToolkit:AccordionItem Header="Controller Info"> <StackPanel Orientation="Horizontal" DataContext="{Binding}"> <TextBlock Text="Content:" /> <TextBlock Text="{Binding Path=Name}" /> </StackPanel> </layoutToolkit:AccordionItem> </layoutToolkit:Accordion> C# code: private void treeSceneNode_SelectedItemChanged(object sender, RoutedPropertyChangedEventArgs<object> e) { if (e.NewValue != e.OldValue) { if (e.NewValue is SceneNode) { accPanel.DataContext = e.NewValue; //e.NewValue is a class that contains Name property } } } But the problem occurs when I try to achieve this using a DataTemplate and dynamically build the AccordionItem; the binding is not working: <layoutToolkit:Accordion x:Name="accPanel" SelectionMode="ZeroOrMore" SelectionSequence="Simultaneous"> </layoutToolkit:Accordion> and the DataTemplate in my ResourceDictionary <DataTemplate x:Key="dtSceneNodeContent"> <StackPanel Orientation="Horizontal" DataContext="{Binding}"> <TextBlock Text="Content:" /> <TextBlock Text="{Binding Path=Name}" /> </StackPanel> </DataTemplate> and C# code: private void treeSceneNode_SelectedItemChanged(object sender, RoutedPropertyChangedEventArgs<object> e) { if (e.NewValue != e.OldValue) { ResourceDictionary rd = new ResourceDictionary(); rd.Source = new Uri("/SilverGL.GUI;component/SilverGLDesignerResourceDictionary.xaml", UriKind.RelativeOrAbsolute); if (e.NewValue is SceneNode) { accPanel.DataContext = e.NewValue; AccordionItem accController = new AccordionItem(); accController.Header = "Controller Info"; accController.ContentTemplate = rd["dtSceneNodeContent"] as DataTemplate; accPanel.Items.Add(accController); } else { // Other type of node } } } I really need help with this issue. Thanks for any support. Daniel
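
    A hedged guess at the missing piece, not a confirmed fix: a ContentTemplate only has data to bind against once the item's Content is set, so the dynamic path may simply need the selected node assigned as Content alongside the template. Sketch of the relevant lines only:

        AccordionItem accController = new AccordionItem();
        accController.Header = "Controller Info";
        accController.ContentTemplate = rd["dtSceneNodeContent"] as DataTemplate;
        accController.Content = e.NewValue;   // the DataTemplate's bindings resolve against this
        accPanel.Items.Add(accController);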

    Read the article

  • Objective-c design advice for use of different data sources, swapping between test and live

    - by user200341
    I'm in the process of designing an application that is part of a larger piece of work, depending on other people to build an API that the app can make use of to retrieve data. While I was thinking about how to setup this project and design the architecture around it, something occurred to me, and I'm sure many people have been in similar situations. Since my work is depending on other people to complete their tasks, and a test server, this slows work down at my end. So the question is: What's the best practice for creating test repositories and classes, implementing them, and not having to depend on altering several places in the code to swap between the test classes and the actual repositories / proper api calls. Contemplate the following scenario: GetDataFromApiCommand *getDataCommand = [[GetDataFromApiCommand alloc]init]; getDataCommand.delegate = self; [getDataCommand getData]; Once the data is available via the API, "GetDataFromApiCommand" could use the actual API, but until then a set of mock data could be returned upon the call of [getDataCommand getData] There might be multiple instances of this, in various places in the code, so replacing all of them wherever they are, is a slow and painful process which inevitably leads to one or two being overlooked. In strongly typed languages we could use dependency injection and just alter one place. In objective-c a factory pattern could be implemented, but is that the best route to go for this? GetDataFromApiCommand *getDataCommand = [GetDataFromApiCommandFactory buildGetDataFromApiCommand]; getDataCommand.delegate = self; [getDataCommand getData]; What is the best practices to achieve this result? Since this would be most useful, even if you have the actual API available, to run tests, or work off-line, the ApiCommands would not necessarily have to be replaced permanently, but the option to select "Do I want to use TestApiCommand or ApiCommand". It is more interesting to have the option to switch between: All commands are test and All command use the live API, rather than selecting them one by one, however that would also be useful to do for testing one or two actual API commands, mixing them with test data. EDIT The way I have chosen to go with this is to use the factory pattern. I set up the factory as follows: @implementation ApiCommandFactory + (ApiCommand *)newApiCommand { // return [[ApiCommand alloc]init]; return [[ApiCommandMock alloc]init]; } @end And anywhere I want to use the ApiCommand class: GetDataFromApiCommand *getDataCommand = [ApiCommandFactory newApiCommand]; When the actual API call is required, the comments can be removed and the mock can be commented out. Using new in the message name implies that who ever uses the factory to get an object, is responsible for releasing it (since we want to avoid autorelease on the iPhone). If additional parameters are required, the factory needs to take these into consideration i.e: [ApiCommandFactory newSecondApiCommand:@"param1"]; This will work quite well with repositories as well.

    Read the article

  • In a PHP project, how do you organize and access your helper objects?

    - by Pekka
    How do you organize and manage your helper objects like the database engine, user notification, error handling and so on in a PHP based, object oriented project? Say I have a large PHP CMS. The CMS is organized in various classes. A few examples: the database object user management an API to create/modify/delete items a messaging object to display messages to the end user a context handler that takes you to the right page a navigation bar class that shows buttons a logging object possibly, custom error handling etc. I am dealing with the eternal question, how to best make these objects accessible to each part of the system that needs it. my first apporach, many years ago was to have a $application global that contained initialized instances of these classes. global $application; $application->messageHandler->addMessage("Item successfully inserted"); I then changed over to the Singleton pattern and a factory function: $mh =&factory("messageHandler"); $mh->addMessage("Item successfully inserted"); but I'm not happy with that either. Unit tests and encapsulation become more and more important to me, and in my understanding the logic behind globals/singletons destroys the basic idea of OOP. Then there is of course the possibility of giving each object a number of pointers to the helper objects it needs, probably the very cleanest, resource-saving and testing-friendly way but I have doubts about the maintainability of this in the long run. Most PHP frameworks I have looked into use either the singleton pattern, or functions that access the initialized objects. Both fine approaches, but as I said I'm happy with neither. I would like to broaden my horizon on what is possible here and what others have done. I am looking for examples, additional ideas and pointers towards resources that discuss this from a long-term, real-world perspective. Also, I'm interested to hear about specialized, niche or plain weird approaches to the issue. Bounty I am following the popular vote in awarding the bounty, the answer which is probably also going to give me the most. Thank you for all your answers!
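
    Purely as an illustration of the constructor-injection route discussed above (the messageHandler name comes from the question; the container code is invented): a very small service container keeps the wiring in one composition-root file, stays unit-test friendly, and avoids both globals and singletons.

        <?php
        // Minimal service container: factories registered once, created lazily, then shared.
        class Container {
            private $factories = array();
            private $instances = array();

            public function set($name, Closure $factory) {
                $this->factories[$name] = $factory;
            }

            public function get($name) {
                if (!array_key_exists($name, $this->instances)) {
                    $factory = $this->factories[$name];
                    $this->instances[$name] = $factory($this);
                }
                return $this->instances[$name];
            }
        }

        // Composition root: the only place that knows the concrete classes.
        $c = new Container();
        $c->set('db', function ($c) { return new PDO('sqlite::memory:'); });
        $c->set('messageHandler', function ($c) { return new MessageHandler($c->get('db')); });

        // A controller then receives what it needs instead of reaching for globals:
        // $handler = $c->get('messageHandler');
        // $handler->addMessage('Item successfully inserted');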

    Read the article

  • Java app makes screen display unresponsive after 10 minutes of user idle time

    - by Ross
    I've written a Java app that allows users to script mouse/keyboard input (JMacro, link not important, only for the curious). I personally use the application to automate character actions in an online game overnight while I sleep. Unfortunately, I keep coming back to the computer in the morning to find it unresponsive. Upon further testing, I'm finding that my application causes the computer to become unresponsive after about 10 minutes of user idle time (even if the application itself it simulating user activity). I can't seem to pin-point the issue, so I'm hoping somebody else might have a suggestion of where to look or what might be causing the issue. The relevant symptoms and characteristics: Unresponsiveness occurs after user is idle for 10 minutes User can still move the mouse pointer around the screen Everything but the mouse appears frozen... mouse clicks have no effect and no applications update their displays, including the Windows 7 desktop I left the task manager up along the with the app overnight so I could see the last task manager image before the screen freezes... the Java app is at normal CPU/Memory usage and total CPU usage is only ~1% After moving the mouse (in other words, the user comes back from being idle), the screen image starts updating again within 30 minutes (this is very hit and miss... sometimes 10 minutes, sometimes no results after two hours) User can CTRL-ALT-DEL to get to Windows 7's CTRL-ALT-DEL screen (after a 30 second pause). User is still able to move mouse pointer, but clicking any of the button options causes the screen to appear to freeze again On some very rare occasions, the system never freezes, and I come back to it in the morning with full responsiveness The Java app automatically stops input scripting in the middle of the night, so Windows 7 detects "real" idleness and turns the monitors into Standby mode... which they successfully come out of upon manually moving the mouse in the morning when I wake up, even though the desktop display still appears frozen Given the symptoms and characteristics of the issue, it's as if the Java app is causing the desktop display of the logged in user to stop updating, including any running applications. Programming concepts and Java packages used: Multi-threading Standard out and err are rerouted to a javax.swing.JTextArea The application uses a Swing GUI awt.Robot (very heavily used) awt.PointerInfo awt.MouseInfo System Specs: Windows 7 Professional Java 1.6.0 u17 In conclusion, I should stress that I'm not looking for any specific solutions, as I'm not asking a very specific question. I'm just wondering if anybody has run into a similar problem when using the Java libraries that I'm using. I would also gladly appreciate any suggestions for things to try to attempt to further pinpoint what is causing my problem. Thanks! Ross PS, I'll post an update/answer if I manage to stumble across anything else while I continue to debug this.

    Read the article

  • Parsing XML data with Namespaces in PHP

    - by osbmedia
    I'm trying to work with this XML feed that uses namespaces and i'm not able to get past the colon in the tags. Here's how the XML feed looks like: <r25:events pubdate="2010-05-19T13:58:08-04:00"> <r25:event xl:href="event.xml?event_id=328" id="BRJDMzI4" crc="00000022" status="est"> <r25:event_id>328</r25:event_id> <r25:event_name>Testing 09/2005-08/2006</r25:event_name> <r25:alien_uid/> <r25:event_priority>0</r25:event_priority> <r25:event_type_id xl:href="evtype.xml?type_id=105">105</r25:event_type_id> <r25:event_type_name>CABINET</r25:event_type_name> <r25:node_type>C</r25:node_type> <r25:node_type_name>cabinet</r25:node_type_name> <r25:state>1</r25:state> <r25:state_name>Tentative</r25:state_name> <r25:event_locator>2005-AAAAMQ</r25:event_locator> <r25:event_title/> <r25:favorite>F</r25:favorite> <r25:organization_id/> <r25:organization_name/> <r25:parent_id/> <r25:cabinet_id xl:href="event.xml?event_id=328">328</r25:cabinet_id> <r25:cabinet_name>cabinet 09/2005-08/2006</r25:cabinet_name> <r25:start_date>2005-09-01</r25:start_date> <r25:end_date>2006-08-31</r25:end_date> <r25:registration_url/> <r25:last_mod_dt>2008-02-27T14:22:43-05:00</r25:last_mod_dt> <r25:last_mod_user>abc00296004</r25:last_mod_user> </r25:event> </r25:events> And here is what I'm using for code - I'll trying to throw these into a bunch of arrays where I can format the output however I want: <?php $ch = curl_init(); curl_setopt($ch, CURLOPT_URL, "http://somedomain.com/blah.xml"); curl_setopt ($ch, CURLOPT_HTTPHEADER, Array("Content-Type: text/xml")); curl_setopt($ch, CURLOPT_USERPWD, "username:password"); curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); $output = curl_exec($ch); curl_close($ch); $xml = new SimpleXmlElement($output); foreach ($xml->events->event as $entry){ $dc = $entry->children('http://www.collegenet.com/r25'); echo $entry->event_name . "<br />"; echo $entry->event_id . "<br /><br />"; }

    Read the article

  • Reading and writing C++ vector to a file

    - by JB
    For some graphics work I need to read in a large amount of data as quickly as possible and would ideally like to directly read and write the data structures to disk. Basically I have a load of 3d models in various file formats which take too long to load so I want to write them out in their "prepared" format as a cache that will load much faster on subsequent runs of the program. Is it safe to do it like this? My worries are around directly reading into the data of the vector? I've removed error checking, hard coded 4 as the size of the int and so on so that i can give a short working example, I know it's bad code, my question really is if it is safe in c++ to read a whole array of structures directly into a vector like this? I believe it to be so, but c++ has so many traps and undefined behavour when you start going low level and dealing directly with raw memory like this. I realise that number formats and sizes may change across platforms and compilers but this will only even be read and written by the same compiler program to cache data that may be needed on a later run of the same program. #include <fstream> #include <vector> using namespace std; struct Vertex { float x, y, z; }; typedef vector<Vertex> VertexList; int main() { // Create a list for testing VertexList list; Vertex v1 = {1.0f, 2.0f, 3.0f}; list.push_back(v1); Vertex v2 = {2.0f, 100.0f, 3.0f}; list.push_back(v2); Vertex v3 = {3.0f, 200.0f, 3.0f}; list.push_back(v3); Vertex v4 = {4.0f, 300.0f, 3.0f}; list.push_back(v4); // Write out a list to a disk file ofstream os ("data.dat", ios::binary); int size1 = list.size(); os.write((const char*)&size1, 4); os.write((const char*)&list[0], size1 * sizeof(Vertex)); os.close(); // Read it back in VertexList list2; ifstream is("data.dat", ios::binary); int size2; is.read((char*)&size2, 4); list2.resize(size2); // Is it safe to read a whole array of structures directly into the vector? is.read((char*)&list2[0], size2 * sizeof(Vertex)); }
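
    A hedged aside on the safety question: this kind of raw-byte round trip is generally considered fine only for trivially copyable element types, and the hard-coded 4 is better spelled as a fixed-width integer. A compile-time guard makes both assumptions explicit (write side shown; the read side mirrors it):

        #include <cstdint>
        #include <fstream>
        #include <type_traits>
        #include <vector>

        struct Vertex { float x, y, z; };

        // Refuse to compile if Vertex ever stops being safe to copy as raw bytes.
        static_assert(std::is_trivially_copyable<Vertex>::value,
                      "Vertex must be trivially copyable for binary I/O");

        void save(const std::vector<Vertex>& list, const char* path) {
            std::ofstream os(path, std::ios::binary);
            std::uint32_t count = static_cast<std::uint32_t>(list.size());
            os.write(reinterpret_cast<const char*>(&count), sizeof count);
            os.write(reinterpret_cast<const char*>(list.data()), count * sizeof(Vertex));
        }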

    Read the article
