Search Results

Search found 7391 results on 296 pages for 'record locking'.


  • Web Services Primer for a WinForms Developer?

    - by Unicorns
    I've been writing client/server applications with WinForms for about six years now, but I have yet to venture into the web space (neither ASP.NET nor web services). Given the direction the job market has been heading for some time, and the fact that I have a basic curiosity, I'd like to get involved with writing web services, but I don't know where to start. I've read about various options (XML/SOAP vs. JSON, REST vs. ...well, actually I don't know what it's called, etc.), but I'm not sure what criteria are in play when choosing one over the other. Obviously, I'd like to leverage the tools I have (Visual Studio, the .NET Framework, etc.) without hamstringing myself into targeting only a particular audience (i.e. writing the service in such a way as to make it difficult to consume from a Windows Mobile/Android/iPhone client, for example). For the record, my plan--for now--is to use WCF for my web service development, but I'm open to using another .NET approach if that's advisable. I realize that this question is pretty open-ended, so it may get closed, but here are some things I'm wondering:

    1. What are some things to consider when choosing the type of web service (REST, etc.) I intend to write? Is it possible (and, if so, feasible) to move from one approach to another?
    2. Can web services be written in an event-driven way? As I said, I'm a WinForms developer, so I'm used to objects raising events for me to react to. For instance, if I have two clients connected to my service, is there a way for me to "push" information to one of them as a result of an action by the other? If this is possible, is it advisable, or am I just not thinking about it correctly?
    3. What authentication mechanisms seem to work best for public-facing services? What about if I plan to have different types of OSes and clients connecting to the service? Is there a generally accepted platform-agnostic approach?
    4. Along the same lines, is authentication something I should be doing myself (authenticating and managing sessions, etc.), or is it something that should be handled at the framework level, with me just defining exactly how it should work? If that's the case, how do I tell who the requester has authenticated themselves as? I started writing an authentication mechanism (simple username/password combinations stored in the database and a corresponding session table with a GUID key) within my service, requiring that key to be passed with every operation (other than logging in, of course), but I want to make sure I'm not reinventing the wheel here. However, I also don't want to clutter up the server with a bunch of machine user accounts just to use Basic authentication, and I'm under the impression that Digest (and of course Windows) authentication requires a machine (or AD) user account.
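
    Since WCF is the plan, a minimal sketch of a REST-flavored WCF contract may make the options above more concrete. Everything here (service name, Order type, URI template) is a hypothetical illustration, not a recommendation:

    ```csharp
    using System.Runtime.Serialization;
    using System.ServiceModel;
    using System.ServiceModel.Web;

    [DataContract]
    public class Order
    {
        [DataMember] public string Id { get; set; }
        [DataMember] public decimal Total { get; set; }
    }

    [ServiceContract]
    public interface IOrderService
    {
        // GET /orders/{id} returns JSON, which any client platform can parse.
        [OperationContract]
        [WebGet(UriTemplate = "orders/{id}", ResponseFormat = WebMessageFormat.Json)]
        Order GetOrder(string id);
    }
    ```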


  • LINQ Query Returning Multiple Copies Of First Result

    - by Mike G
    I'm trying to figure out why a simple query in LINQ is returning odd results. I have a view defined in the database. It basically brings together several other tables and does some data munging. It really isn't anything special, except that it deals with a large data set and can be a bit slow. I want to query this view based on a long. The two sample queries below query the view in different ways:

    ```csharp
    var la = Runtime.OmsEntityContext.Positions
        .Where(p => p.AccountNumber == 12345678)
        .ToList();

    var deDa = Runtime.OmsEntityContext.Positions
        .Where(p => p.AccountNumber == 12345678)
        .Select(p => new { p.AccountNumber, p.SecurityNumber, p.CUSIP })
        .ToList();
    ```

    The first one should hand back a List<Position>; the second, a list of anonymous objects. When I run these queries in Entity Framework, the first hands me back a list of results that are all exactly the same. The second hands back data where the account number is the one I queried and the other values differ. This seems to happen on a per-account-number basis: if I were to query for one account number or another, all the Position objects for one account would have the same values (those of the first one in that account's list of Positions), and the second account would have a set of Position objects that all had the same values (again, those of the first one in its list of Position objects). I can write SQL that is in effect the same as either of the two EF queries; both come back with results (say four) that show the correct data, one account number with different security numbers. Why does this happen? Is there something I could be doing wrong such that, if I had four results for the first query above, the first record's data also appears in the 2nd-4th objects? I cannot fathom what could be causing this, and I've searched Google for all kinds of keywords without seeing anyone with this issue. We partial-class out the Positions class for added functionality (smart object) and some smart properties; there are even some constructors that provide some view-model-type support. None of this is invoked in the request (I'm 99% sure of this), and we use this same pattern all over the app. The only thing I can think of is that the mapping in the EDMX is screwy. Is there a way this would happen if the "primary keys" in the EDMX were not in fact unique, given the way the view is constructed? I'm thinking the dev who imported this model into the EDMX let the designer auto-select what would be unique. Any help would give a haggard dev some hope!
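
    If the EDMX "primary key" on the view really isn't unique per row, EF's identity map will resolve every row carrying the same key values to the first entity it materialized, which matches the symptom (the anonymous-type projection bypasses entity materialization, which is why it looks correct). A quick way to test that hypothesis, as a sketch for an EF4 ObjectContext model:

    ```csharp
    using System.Data.Objects;

    // Sketch only: bypass the identity map so each row materializes fresh.
    // If the duplicates disappear, the EDMX key for the view is not unique.
    Runtime.OmsEntityContext.Positions.MergeOption = MergeOption.NoTracking;

    var la = Runtime.OmsEntityContext.Positions
        .Where(p => p.AccountNumber == 12345678)
        .ToList();
    ```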


  • How to remove non-required elements from generated XML via JAXB

    - by Dangling Piyush
    I want to know if there is any way to remove non-required elements from XML generated via JAXB. I have my XSD element definition as follows:

    ```xml
    <xsd:element name="Title" maxOccurs="1" minOccurs="0">
      <xsd:annotation>
        <xsd:documentation>
          A name given to the digital record.
        </xsd:documentation>
      </xsd:annotation>
      <xsd:simpleType>
        <xsd:restriction base="xsd:string">
          <xsd:minLength value="1"/>
        </xsd:restriction>
      </xsd:simpleType>
    </xsd:element>
    ```

    As you can see, it is not a mandatory element, because minOccurs="0", but if it is not empty the length should be at least 1 (the <xsd:minLength value="1"/> restriction). At marshalling time, if I leave the Title field blank, a SAXException is thrown because of the min-length restriction. So what I want is to remove the whole <Title/> occurrence from the generated XML. Right now I have removed the min-length restriction, so it adds the <Title> element as empty (<Title></Title>), but I do not want it like this. Any help is appreciated. I am using JAXB 2.0 for marshalling.

    UPDATE: Following are my variable definitions:

    ```java
    private JAXBContext jaxbContext;
    private Unmarshaller unmarshaller;
    private SchemaFactory factory;
    private Schema schema;
    private Marshaller marshaller;
    ```

    Marshalling code:

    ```java
    jaxbContext = JAXBContext.newInstance(ERecordType.class);
    marshaller = jaxbContext.createMarshaller();
    factory = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
    schema = factory.newSchema(new File(xsdLocation));
    marshaller.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, true);

    ERecordType e = new ERecordType();
    e.setCataloging(rc);

    // Validate against the schema: marshal will throw an exception
    // if the XML does not validate.
    marshaller.setSchema(schema);
    marshaller.marshal(e, System.out);
    ```
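
    JAXB omits an optional (minOccurs="0") element whose backing field is null, so one sketch of a fix is to normalize blank strings to null before marshalling. The setter name is assumed from the usual generated-class convention:

    ```java
    // Minimal sketch: map "" -> null so the optional element is skipped
    // entirely instead of being emitted as <Title></Title>.
    static String blankToNull(String s) {
        return (s == null || s.trim().isEmpty()) ? null : s;
    }

    // usage, with 'record' being the generated type that carries Title:
    // record.setTitle(blankToNull(record.getTitle()));
    ```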


  • How can I send a live video stream to a remote server from my phone?

    - by poc
    Hello, I have a problem streaming video to a server in real time from my phone. That is: let my phone be an IP camera, and let the server watch the live video from my phone. I have googled many, many solutions, but none solves my problem. I use MediaRecorder to record; it can save a video file to the SD card correctly. Then I referred to this page and used the following method:

    ```java
    skt = new Socket(InetAddress.getByName(hostname), port);
    pfd = ParcelFileDescriptor.fromSocket(skt);
    mediaRecorder.setOutputFile(pfd.getFileDescriptor());
    ```

    Now it seems I can send the video stream while recording. However, I wrote a receiver-side program to receive the video stream from Android, and it doesn't work. Is there an error somewhere? I can receive the file, but I cannot open it. I guess the problem may be caused by the file format? Here is an outline of my code.

    On the Android side:

    ```java
    Socket skt = new Socket(hostIP, port);
    ParcelFileDescriptor pfd = ParcelFileDescriptor.fromSocket(skt);
    ...
    mediaRecorder.setAudioSource(MediaRecorder.AudioSource.MIC);
    mediaRecorder.setVideoSource(MediaRecorder.VideoSource.DEFAULT);
    mediaRecorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
    mediaRecorder.setOutputFile(pfd.getFileDescriptor());
    ...
    mediaRecorder.setAudioEncoder(MediaRecorder.AudioEncoder.DEFAULT);
    mediaRecorder.setVideoEncoder(MediaRecorder.VideoEncoder.MPEG_4_SP);
    ...
    mediaRecorder.start();
    ```

    On the receiver side (my Acer notebook):

    ```java
    // anyway, I don't think the file extension will have any effect
    File video = new File(strDate + ".3gpp");
    FileOutputStream fos;
    try {
        fos = new FileOutputStream(video);
        byte[] data = new byte[1024];
        int count = -1;
        while ((count = fin.read(data, 0, 1024)) != -1) {
            fos.write(data, 0, count);
            fos.flush();
        }
        fos.close();
        fin.close();
    } catch (IOException e) {
        // added for completeness: the original excerpt omitted the catch
        e.printStackTrace();
    }
    ```

    I have been confused a long time... thanks in advance.


  • PHP MySQL syntax error

    - by Jacksta
    My script is supposed to look up contacts in a table and present them on the screen to then be edited. However, this is not the case. I am getting the error:

    Parse error: syntax error, unexpected $end in /home/admin/domains/domain.com.au/public_html/pick_modcontact.php on line 50

    NOTE: this is the last line in the script.

    ```php
    <?
    session_start();
    if ($_SESSION[valid] != "yes") {
        header("Location: contact_menu.php");
        exit;
    }

    $db_name = "testDB";
    $table_name = "my_contacts";

    $connection = @mysql_connect("localhost", "user", "pass") or die(mysql_error());
    $db = @mysql_select_db($db_name, $connection) or die(mysql_error());

    $sql = "SELECT id, f_name, l_name FROM $table_name ORDER BY f_name";
    $result = @mysql_query($sql, $connection) or die(mysql_error());

    $num = @mysql_num_rows($result);
    if ($num < 1) {
        $display_block = "<p><em>Sorry No Results!</em></p>";
    } else {
        while ($row = mysql_fetch_array($result)) {
            $id = $row['id'];
            $f_name = $row['f_name'];
            $l_name = $row['l_name'];
            $option_block .= "<option value\"$id\">$l_name, $f_name</option>";
        }
        $display_block = "<form method=\"POST\" action=\"show_modcontact.php\">
        <p><strong>Contact:</strong>
        <select name=\"id\">$option_block</select>
        <input type=\"submit\" name=\"submit\" value=\"Select This Contact\"></p>
        </form>";
    ?>
    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
    <html xmlns="http://www.w3.org/1999/xhtml">
    <head>
    <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
    <title>Modify A Contact</title>
    </head>
    <body>
    <h1>My Contact Management System</h1>
    <h2><em>Modify a Contact</em></h2>
    <p>Select a contact from the list below, to modify the contact's record.</p>
    <? echo "$display_block"; ?>
    <br>
    <p><a href="contact_menu.php">Return to Main Menu</a></p>
    </body>
    </html>
    ```
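
    The "unexpected $end" parse error is the classic symptom of an unclosed block: the else { opened before the while loop is never closed before the script drops out of PHP mode. A sketch of the fix:

    ```php
        $display_block = "<form method=\"POST\" action=\"show_modcontact.php\">
        <p><strong>Contact:</strong>
        <select name=\"id\">$option_block</select>
        <input type=\"submit\" name=\"submit\" value=\"Select This Contact\"></p>
        </form>";
    } // close the else block; this brace is what the parser is missing
    ?>
    ```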


  • EF4: ObjectContext inconsistent when inserting into a view with triggers

    - by user613567
    I get an InvalidOperationException when inserting records into a view that uses an INSTEAD OF trigger in SQL Server with ADO.NET Entity Framework 4. The error message says:

    {"The changes to the database were committed successfully, but an error occurred while updating the object context. The ObjectContext might be in an inconsistent state. Inner exception message: The key-value pairs that define an EntityKey cannot be null or empty. Parameter name: record"}
    at System.Data.Objects.ObjectContext.SaveChanges(SaveOptions options)
    at System.Data.Objects.ObjectContext.SaveChanges()

    In this simplified example I created two tables, Contacts and Employers, and one view, Contacts_x_Employers, which allows me to insert or retrieve rows into/from these two tables at once. The tables only have a Name and an ID attribute, and the view is based on a join of both:

    ```sql
    CREATE VIEW [dbo].[Contacts_x_Employers]
    AS
    SELECT dbo.Contacts.ContactName, dbo.Employers.EmployerName
    FROM dbo.Contacts
    INNER JOIN dbo.Employers ON dbo.Contacts.EmployerID = dbo.Employers.EmployerID
    ```

    And it has this trigger:

    ```sql
    CREATE TRIGGER C_x_E_Inserts ON Contacts_x_Employers
    INSTEAD OF INSERT
    AS
    BEGIN
        SET NOCOUNT ON;

        INSERT INTO Employers (EmployerName)
        SELECT i.EmployerName
        FROM inserted i
        WHERE NOT i.EmployerName IN (SELECT EmployerName FROM Employers)

        INSERT INTO Contacts (ContactName, EmployerID)
        SELECT i.ContactName, e.EmployerID
        FROM inserted i
        INNER JOIN Employers e ON i.EmployerName = e.EmployerName;
    END
    GO
    ```

    The .NET code follows:

    ```csharp
    using (var Context = new TriggersTestEntities())
    {
        Contacts_x_Employers CE1 = new Contacts_x_Employers();
        CE1.ContactName = "J";
        CE1.EmployerName = "T";

        Contacts_x_Employers CE2 = new Contacts_x_Employers();
        CE1.ContactName = "W";  // note: this assigns CE1 again, leaving CE2's key properties unset
        CE1.EmployerName = "C";

        Context.Contacts_x_Employers.AddObject(CE1);
        Context.Contacts_x_Employers.AddObject(CE2);
        Context.SaveChanges(); // <-- line with the error
    }
    ```

    SSDL and CSDL (the view nodes):

    ```xml
    <!-- SSDL -->
    <EntityType Name="Contacts_x_Employers">
      <Key>
        <PropertyRef Name="ContactName" />
        <PropertyRef Name="EmployerName" />
      </Key>
      <Property Name="ContactName" Type="varchar" Nullable="false" MaxLength="50" />
      <Property Name="EmployerName" Type="varchar" Nullable="false" MaxLength="50" />
    </EntityType>

    <!-- CSDL -->
    <EntityType Name="Contacts_x_Employers">
      <Key>
        <PropertyRef Name="ContactName" />
        <PropertyRef Name="EmployerName" />
      </Key>
      <Property Name="ContactName" Type="String" Nullable="false" MaxLength="50" Unicode="false" FixedLength="false" />
      <Property Name="EmployerName" Type="String" Nullable="false" MaxLength="50" Unicode="false" FixedLength="false" />
    </EntityType>
    ```

    The Visual Studio solution and the SQL scripts to re-create the whole application can be found in TestViewTrggers.zip at ftp://JulioSantos.com/files/TriggerBug/. I appreciate any assistance that can be provided; I have already spent days working on this problem.


  • CI pagination, POST problem

    - by Gwood
    Okay, I am pretty new to CI and I am stuck on pagination. I am performing pagination on a record set that is the result of a query, and everything seems to be working fine, but there's some problem, probably with the link. I am displaying 10 results per page. If there are fewer than 10 results it's fine, or if I pull up the entire table it works fine. But if the result is more than 10 rows, the first 10 are perfectly displayed, and when I click the pagination link to get to the next page, the next page displays the rest of the results from the query as well as other records in the table. I am confused... any help? Here's the model code I am using:

    ```php
    function getTeesLike($field, $param)
    {
        $this->db->like($field, $param);
        $this->db->limit(10, $this->uri->segment(3));
        $query = $this->db->get('shirt');
        if ($query->num_rows() > 0) {
            return $query->result_array();
        }
    }

    function getNumTeesfromQ($field, $param)
    {
        $this->db->like($field, $param);
        $query = $this->db->get('shirt');
        return $query->num_rows();
    }
    ```

    And here's the controller code:

    ```php
    $KW = $this->input->post('searchstr');
    $this->load->library('pagination');
    $config['base_url'] = 'http://localhost/cit/index.php/tees/show/';
    $config['total_rows'] = $this->T->getNumTeesfromQ('Title', $KW);
    $config['per_page'] = '10';
    $this->pagination->initialize($config);
    $data['tees'] = $this->T->getTeesLike('Title', $KW);
    $data['title'] = 'Displaying Tees data';
    $data['header'] = 'Tees List';
    $data['links'] = $this->pagination->create_links();
    $this->load->view('tee_res', $data);
    // What am I doing wrong here? Please help...
    ```

    I guess the problem is with $KW = $this->input->post('searchstr'), because if I hard-code a value for $KW it works fine. Maybe I should use POST differently... but then how do I pass the value from the form without POSTing it? It's CI, so not GET... ??????
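
    The pagination links themselves are plain GETs, so $this->input->post('searchstr') comes back empty on every page after the first and the LIKE filter silently vanishes, which would explain the extra records. A common sketch of a fix (assuming the session library is loaded; the key name is arbitrary) is to stash the keyword on the initial POST and fall back to it afterwards:

    ```php
    // Sketch: persist the search keyword across the pagination GETs.
    $posted = $this->input->post('searchstr');
    if ($posted !== false && $posted !== '') {
        $this->session->set_userdata('searchstr', $posted); // initial POST
    }
    $KW = $this->session->userdata('searchstr'); // subsequent page links
    ```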


  • Why are my Opteron cores running at only 75% capacity each? (25% CPU idle)

    - by Tim Cooper
    We've just taken delivery of a powerful 32-core AMD Opteron server with 128 GB of RAM: 2 x 6272 CPUs with 16 cores each. We are running a big, long-running Java task on 30 threads, with the NUMA optimisations for Linux and Java turned on. Our Java threads mostly use objects that are private to each thread, sometimes read memory that other threads will be reading, and very, very occasionally write or lock shared objects. We can't explain why the CPU cores are 25% idle. Below is a dump of top:

    ```
    top - 23:06:38 up 1 day, 23 min, 3 users, load average: 10.84, 10.27, 9.62
    Tasks: 676 total, 1 running, 675 sleeping, 0 stopped, 0 zombie
    Cpu(s): 64.5%us, 1.3%sy, 0.0%ni, 32.9%id, 1.3%wa, 0.0%hi, 0.0%si, 0.0%st
    Mem: 132138168k total, 131652664k used, 485504k free, 92340k buffers
    Swap: 5701624k total, 230252k used, 5471372k free, 13444344k cached
    ...
    top - 22:37:39 up 23:54, 3 users, load average: 7.83, 8.70, 9.27
    Tasks: 678 total, 1 running, 677 sleeping, 0 stopped, 0 zombie
    Cpu0  : 75.8%us, 2.0%sy, 0.0%ni, 22.2%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
    Cpu1  : 77.2%us, 1.3%sy, 0.0%ni, 21.5%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
    Cpu2  : 77.3%us, 1.0%sy, 0.0%ni, 21.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
    Cpu3  : 77.8%us, 1.0%sy, 0.0%ni, 21.2%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
    Cpu4  : 76.9%us, 2.0%sy, 0.0%ni, 21.1%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
    Cpu5  : 76.3%us, 2.0%sy, 0.0%ni, 21.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
    Cpu6  : 12.6%us, 3.0%sy, 0.0%ni, 84.4%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
    Cpu7  :  8.6%us, 2.0%sy, 0.0%ni, 89.4%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
    Cpu8  : 77.0%us, 2.0%sy, 0.0%ni, 21.1%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
    Cpu9  : 77.0%us, 2.0%sy, 0.0%ni, 21.1%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
    Cpu10 : 77.6%us, 1.7%sy, 0.0%ni, 20.8%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
    Cpu11 : 75.7%us, 2.0%sy, 0.0%ni, 21.4%id, 1.0%wa, 0.0%hi, 0.0%si, 0.0%st
    Cpu12 : 76.6%us, 2.3%sy, 0.0%ni, 21.1%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
    Cpu13 : 76.6%us, 2.3%sy, 0.0%ni, 21.1%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
    Cpu14 : 76.2%us, 2.6%sy, 0.0%ni, 15.9%id, 5.3%wa, 0.0%hi, 0.0%si, 0.0%st
    Cpu15 : 76.6%us, 2.0%sy, 0.0%ni, 21.5%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
    Cpu16 : 73.6%us, 2.6%sy, 0.0%ni, 23.8%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
    Cpu17 : 74.5%us, 2.3%sy, 0.0%ni, 23.2%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
    Cpu18 : 73.9%us, 2.3%sy, 0.0%ni, 23.8%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
    Cpu19 : 72.9%us, 2.6%sy, 0.0%ni, 24.4%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
    Cpu20 : 72.8%us, 2.6%sy, 0.0%ni, 24.5%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
    Cpu21 : 72.7%us, 2.3%sy, 0.0%ni, 25.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
    Cpu22 : 72.5%us, 2.6%sy, 0.0%ni, 24.8%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
    Cpu23 : 73.0%us, 2.3%sy, 0.0%ni, 24.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
    Cpu24 : 74.7%us, 2.7%sy, 0.0%ni, 22.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
    Cpu25 : 74.5%us, 2.6%sy, 0.0%ni, 22.8%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
    Cpu26 : 73.7%us, 2.0%sy, 0.0%ni, 24.3%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
    Cpu27 : 74.1%us, 2.3%sy, 0.0%ni, 23.6%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
    Cpu28 : 74.1%us, 2.3%sy, 0.0%ni, 23.6%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
    Cpu29 : 74.0%us, 2.0%sy, 0.0%ni, 24.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
    Cpu30 : 73.2%us, 2.3%sy, 0.0%ni, 24.5%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
    Cpu31 : 73.1%us, 2.0%sy, 0.0%ni, 24.9%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
    Mem: 132138168k total, 131711704k used, 426464k free, 88336k buffers
    Swap: 5701624k total, 229572k used, 5472052k free, 13745596k cached

      PID USER     PR NI  VIRT   RES  SHR S   %CPU %MEM    TIME+  COMMAND
    13865 root     20  0  122g  112g 3.1g S 2334.3 89.6 20726:49  java
    27139 jayen    20  0 15428  1728  952 S    2.6  0.0   0:04.21 top
    27161 sysadmin 20  0 15428  1712  940 R    1.0  0.0   0:00.28 top
       33 root     20  0     0     0    0 S    0.3  0.0   0:06.24 ksoftirqd/7
      131 root     20  0     0     0    0 S    0.3  0.0   0:09.52 events/0
     1858 root     20  0     0     0    0 S    0.3  0.0   1:35.14 kondemand/0
    ```

    A dump of the Java stack confirms that none of the threads are anywhere near the few places where locks are used, nor are they anywhere near any disk or network I/O. I had trouble finding a clear explanation of what top means by "idle" versus "wait", but I get the impression that "idle" means "no more threads that need to be run". That doesn't make sense in our case: we're using Executors.newFixedThreadPool(30), there are a large number of tasks pending, and each task lasts for 10 seconds or so. I suspect the explanation requires a good understanding of NUMA. Is the "idle" state what you see when a CPU is waiting for a non-local memory access? If not, then what is the explanation?


  • Transforming binary data using SSIS and SQL Server 2008

    - by Rick
    Hello All - I have a task to import, transform, and extract zipped binary files that contain both text data and embedded binary data. Within the files is data that is relational in nature and needs to be processed into a defined database structure. Currently I have a single-threaded C# app that essentially grabs all the files from the directory (currently there are 13K files of varying sizes), extracts the data on a single thread line by line, and inserts it into the database. As you can imagine, this is a very slow process and unacceptable. There are several different parsing routines used, depending on the header record in the file, and there are potentially up to a million rows per file when all the data is extracted to the row level of detail. A follow-on task is to parse those rows into their appropriate tables based on their content, i.e. the textual content has to be parsed further into "buckets" of like data in the database. That about sums up the big picture. Now for the problem task list:

    1. How do I iterate through a packet of data using SSIS? In the app, the file is decompressed and then parsed using streams and byte arrays, and is routed to the required parsing routine based on the header data of each packet. There is bit swapping involved as well. Should I wrap the app code up into a script task (or tasks) and let it do the custom processing?
    2. The data is separated by year, and the SQL Server tables are partitioned by year as well. I also need to be able to "catch" bad file data, which I would most likely process by hand.
    3. Should I simply load the zipped file into SQL as a blob and parse it with T-SQL? Would that be multithreaded if done that way? I'm not sure how to do the parsing involved here in T-SQL. Which do you think would be faster?
    4. Potentially, the data that is currently processed via files could come to us via a socket. Can SSIS collect that data in real time? How would I go about setting that up?

    Processing these new files from the directories will become a daily task. I can manage the data once I get it into SQL Server; getting it there in a timely fashion seems to be the long pole in the tent for me. I would appreciate any comments or suggestions from the group. Rick


  • USB windows xp final USB access issues

    - by Lex Dean
    I basically understand you C++ people; please do not get distracted because I'm writing in Delphi. I have a stable USB listing method that enumerates all my USB devices. I get the device path and this structure:

    ```delphi
    TSPDevInfoData = packed record
      Size: DWORD;
      ClassGuid: TGUID;
      DevInst: DWORD; // DEVINST handle
      Reserved: DWord;
    end;
    ```

    I get my product ID and vendor ID successfully from the device path, and I list all USB devices connected to the computer at the time. That enables me to access the registry data for each device in a stable way. What I'm lacking is a little direction:

    1. Can the friendly name be written inside the connected USB microchip by the firmware programmer? (I'm thinking of this to identify the device even further; or is it meant to help identify bulk-data-transfer devices like memory sticks and cameras?)
    2. Can I use SPDRP_REMOVAL_POLICY_OVERRIDE to somehow reset these policies?
    3. What else can I do with the registry details?
    4. How do I identify when someone unplugs a device the program is using (in standard Windows XP)? I used a documented Windows event that did not respond. Can I read a registry value to identify whether it's still connected?
    5. I am using CreateFileA(DevicePath) to send and receive data. I have read that if someone unplugs the device in the middle of a data transfer, it's difficult to clear resources.
    6. What can IoCreateDevice do for me, and how does one use it for that task?

    This two-way question of connection status and system lock-up situations is very concerning. Has someone read anything about this subject recently? My objectives are to:

    1. List connected USB devices.
    2. Identify an in-development microcontroller among everything else.
    3. Send and receive data in a stable and fast way, to the limits of the controller.
    4. Avoid lock-ups when transferring data.

    Note I'm not using any service packs. I understand everything USB is in ANSI even though Windows XP is not, and .NET is all about ANSI (what a waste of memory). I plan to continue this project into .NET at a later date as an addition. MSDN gives me structures and functions and what should link to what, but says little about what they get used for. What is available in my language? Delphi is way overpriced, to the point that it needs a major price drop.


  • Dynamic button in JSP

    - by kawtousse
    Hi everyone, in my JSP I have a table constructed dynamically, like the following:

    ```java
    retour.append("<tr>");
    try {
        s = HibernateUtil.currentSession();
        tx = s.beginTransaction();
        Query query = s.createQuery(HQL_QUERY);
        for (Iterator it = query.iterate(); it.hasNext();) {
            if (it.hasNext()) {
                Dailytimesheet object = (Dailytimesheet) it.next();
                retour.append("<td>" + object.getActivity() + "</td>");
                retour.append("<td>" + object.getProjectCode() + "</td>");
                retour.append("<td>" + object.getWAName() + "</td>");
                retour.append("<td>" + object.getTaskCode() + "</td>");
                retour.append("<td>" + object.getTimeFrom() + "</td>");
                retour.append("<td>" + object.getTimeSpent() + "</td>");
                retour.append("<td>" + object.getPercentTaskComplete() + "</td>");
                if (droitdaccess) {
                    retour.append("");
                    retour.append("");
                    retour.append("");
                    retour.append("<td bordercolor=#FFFFFF>");
                    retour.append("<input type=\"hidden\" id=\"id_" + nomTab + "_" + compteur + "\" value=\"" + object.getIdDailyTimeSheet() + "\"/>");
                    retour.append("<img src=\"icon_delete.gif\" onClick=\"deletearowById('id_" + nomTab + "_" + compteur + "')\" style=\"cursor:pointer\" name=\"action\" value=\"deleting\" />");
                    retour.append("</td>");
                }
            }
            compteur++;
            retour.append("</tr>");
        }
        // finish the table.
        retour.append("</table>");
    ```

    Next to the table, I want to display a button named "Send" in order to submit the table's content. I do not want to display this button when the table is empty; as soon as the table is populated by even one record, the button should appear. How should I handle this case? Thanks.
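
    A minimal sketch of one way to do it, reusing the compteur counter from the loop above (this assumes compteur starts at 0 before the loop; the button markup is illustrative):

    ```java
    // After the loop: compteur > 0 means at least one row was appended.
    if (compteur > 0) {
        retour.append("<input type=\"submit\" name=\"send\" value=\"Send\"/>");
    }
    ```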


  • Function Point Analysis -- a seriously overestimating technique?

    - by kizzx2
    I know questions about FPA have been asked numerous times before, but this time I'm taking a more analytical angle at it, backed up with data.

    1. First, some data

    This question is based on a tutorial, which had a "Sample Count" section demonstrating the process step by step. You can see some screenshots of the sample application here. In the end, the author calculated the unadjusted FP count to be 99. There is another article on InformIT with industry data on typical hours/FP. It ranges from 2 hours/FP to 27.4 hours/FP. Let's try to stick with 2 for the moment (since SO readers are probably the more efficient crowd :p).

    2. Reality check!?

    Now just check out the screenshots again and do a little math:

    99 * 2 = 198 hours
    198 hours / 40 hours per week = 5 weeks

    Seriously? That sample application is going to take 5 weeks to implement? Is it just my feeling that it wouldn't take any decent programmer longer than one week (I'm not even saying one weekend) to have it completed? Now let's try estimating the cost of the project. We'll use New York's minimum wage at the moment (Wikipedia), which is $7.25:

    198 * 7.25 = $1,435.50

    From what I can see in the screenshots, this application is a small Excel-improvement app. I could have bought MS Office Pro for 200 bucks, which gives me greater interoperability (.xls files) and flexibility (spreadsheets). (For the record, that same web site has another article discussing productivity. It seems they typically use 4.2 hours/FP, which gives us even more shocking stats:

    99 * 4.2 = 415 hours = 10 weeks = almost 3 whopping months!
    415 hours * $7.25 = $3000 zomg

    That's even assuming that all our poor coders get the minimum wage!)

    3. Am I missing something here?

    Right now, I can come up with several possible explanations:

    1. FPA is really only suited for bigger projects (1000+ FPs), so it becomes extremely inaccurate at smaller scales.
    2. The hours/FP metric fluctuates abruptly from team to team and project to project. For a small project like this, we could have used something like 0.5 hours/FP. (This kind of makes the whole estimation exercise pointless, unless my firm does the same type of projects for several years with the same team, which is not really common.)
    3. From my experience with several software metrics, Function Points are really not a lightweight metric. If the hours/FP figure fluctuates so much, then what's the point? Maybe I could have gone with User Story Points, which are a lot faster to estimate and arguably almost as uncertain.

    What would be the FP experts' answers to this?


  • Xcode sqlite3 - SELECT always returns SQLITE_DONE

    - by user573633
    Hi developers... a noob here asking for help after a day of head-banging. I am working on an app backed by one sqlite3 database with two tables. I have now come to a step where I want to select from a table with an argument. The code is here:

    ```objc
    - (NSMutableArray *)getGroupsPeopleWhoseGroupName:(NSString *)gn {
        NSMutableArray *groupedPeopleArray = [[NSMutableArray alloc] init];
        const char *sql = "SELECT * FROM Contacts WHERE groupName='?'";
        @try {
            NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
            NSString *docsDir = [paths objectAtIndex:0];
            NSString *theDBPath = [docsDir stringByAppendingPathComponent:@"ContactBook.sqlite"];
            if (!(sqlite3_open([theDBPath UTF8String], &database) == SQLITE_OK)) {
                NSLog(@"An error opening database.");
            }
            sqlite3_stmt *st;
            NSLog(@"debug004 - sqlite3_stmt success.");
            if (sqlite3_prepare_v2(database, sql, -1, &st, NULL) != SQLITE_OK) {
                NSLog(@"Error, failed to prepare statement.");
            }
            // DB is ready for accessing, now start getting all the info.
            while (sqlite3_step(st) == SQLITE_ROW) {
                MyContacts *aContact = [[MyContacts alloc] init];
                // get contactID from DB.
                aContact.contactID = sqlite3_column_int(st, 0);
                if (sqlite3_column_text(st, 1) != NULL) {
                    aContact.firstName = [NSString stringWithUTF8String:(char *)sqlite3_column_text(st, 1)];
                } else {
                    aContact.firstName = @"";
                }
                // here retrieve other columns' data ...
                // store the info retrieved into the newly created array.
                [groupedPeopleArray addObject:aContact];
                [aContact release];
            }
            if (sqlite3_finalize(st) != SQLITE_OK) {
                NSLog(@"Failed to finalize data statement.");
            }
            if (sqlite3_close(database) != SQLITE_OK) {
                NSLog(@"Failed to close database.");
            }
        }
        @catch (NSException *e) {
            NSLog(@"An exception occurred: %@", [e reason]);
            return nil;
        }
        return groupedPeopleArray;
    }
    ```

    MyContacts is the class where I keep all the record variables. My problem is that sqlite3_step(st) always returns SQLITE_DONE, so I can never get my contacts (I verified this by checking the return value). What am I doing wrong here? Many thanks in advance!
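
    Two things stand out in the statement, offered as a hedged diagnosis: the ? sits inside single quotes, so SQLite treats it as the literal string '?' rather than a placeholder, and the gn argument is never bound. A sketch of the corrected prepare-and-bind using the standard sqlite3 C API:

    ```objc
    // Sketch: unquoted placeholder plus an explicit bind of gn.
    const char *sql = "SELECT * FROM Contacts WHERE groupName = ?";
    sqlite3_stmt *st;
    if (sqlite3_prepare_v2(database, sql, -1, &st, NULL) == SQLITE_OK) {
        // Bind the NSString argument to parameter 1 (parameters are 1-based).
        sqlite3_bind_text(st, 1, [gn UTF8String], -1, SQLITE_TRANSIENT);
        while (sqlite3_step(st) == SQLITE_ROW) {
            // ... column reads as in the original method ...
        }
        sqlite3_finalize(st);
    }
    ```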


  • UTF-8 GET using Indy 10.5.8.0 and Delphi XE2

    - by Bogdan Botezatu
    I'm writing my first Unicode application with Delphi XE2 and I've stumbled upon an issue with GET requests to a Unicode URL. Briefly put, it's a routine in an MP3-tagging application that takes a track title and an artist and queries Last.fm for the corresponding album, track number and genre. I have the following code:

    ```delphi
    function GetMP3Info(artist, track: string): TMP3Data; // <-- (this is a record)
    var
      TrackTitle, ArtistTitle: WideString;
      webquery: WideString;
    [....]
      WebQuery := UTF8Encode('http://ws.audioscrobbler.com/2.0/?method=track.getcorrection&api_key=' + apikey + '&artist=' + artist + '&track=' + track);
      // [process the result of the web query, getting the correction for the artist and title]
      // e.g.: for artist := Bucovina and track := Mestecanis, the corrected values are
      // ArtistTitle := Bucovina;
      // TrackTitle := Mestecani?;

      // Now here is the tricky part:
      webquery := UTF8Encode('http://ws.audioscrobbler.com/2.0/?method=track.getInfo&api_key=' + apikey + '&artist=' + unescape(ArtistTitle) + '&track=' + unescape(TrackTitle));
      // the unescape function replaces spaces (' ') with '+' to comply with the Last.fm requests
    [some more processing]
    end;
    ```

    The webquery looks right in a TMemo (http://ws.audioscrobbler.com/2.0/?method=track.getInfo&api_key=e5565002840xxxxxxxxxxxxxx23b98ad&artist=Bucovina&track=Mestecani?). Yet when I try to send a GET to the webquery using TIdHTTP (with the ContentEncoding property set to 'UTF-8'), I see in Wireshark that the component is sending the GET with the ANSI value '/2.0/?method=track.getInfo&api_key=e5565002840xxxxxxxxxxxxxx23b98ad&artist=Bucovina&track=Mestec?ni?'. Here are the full headers of the GET request and response:

    ```
    GET /2.0/?method=track.getInfo&api_key=e5565002840xxxxxxxxxxxxxx23b98ad&artist=Bucovina&track=Mestec?ni? HTTP/1.1
    Content-Encoding: UTF-8
    Host: ws.audioscrobbler.com
    Accept: text/html, */*
    Accept-Encoding: identity
    User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2.23) Gecko/20110920 Firefox/3.6.23 SearchToolbar/1.22011-10-16 20:20:07

    HTTP/1.0 400 Bad Request
    Date: Tue, 09 Oct 2012 20:46:31 GMT
    Server: Apache/2.2.22 (Unix)
    X-Web-Node: www204
    Access-Control-Allow-Origin: *
    Access-Control-Allow-Methods: POST, GET, OPTIONS
    Access-Control-Max-Age: 86400
    Cache-Control: max-age=10
    Expires: Tue, 09 Oct 2012 20:46:42 GMT
    Content-Length: 114
    Connection: close
    Content-Type: text/xml; charset=utf-8;

    <?xml version="1.0" encoding="utf-8"?>
    <lfm status="failed">
      <error code="6">
        Track not found
      </error>
    </lfm>
    ```

    The question that puzzles me is: am I overlooking anything related to setting the properties of the TIdHTTP control? How can I stop the well-formed URL I'm composing in the application from being wrongly sent to the server? Thanks.
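
    For what it's worth, Indy expects URLs to be percent-encoded rather than raw UTF8Encode'd strings, so the non-ASCII characters need escaping before the request goes out. A sketch using Indy's TIdURI helper from the IdURI unit (treat the exact call as an assumption to verify against your Indy build):

    ```delphi
    uses
      IdURI;

    // Sketch: percent-encode the query string instead of pasting raw
    // UTF-8 bytes into the URL.
    webquery := TIdURI.URLEncode('http://ws.audioscrobbler.com/2.0/' +
      '?method=track.getInfo&api_key=' + apikey +
      '&artist=' + ArtistTitle + '&track=' + TrackTitle);
    ```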


  • What is the most elegant way to implement a business rule relating to a child collection in LINQ?

    - by AaronSieb
    I have two tables in my database:

    Wiki
        WikiId (PK)
        ...
    WikiUser
        WikiUserId (PK)
        WikiId
        UserId
        IsOwner
        ...

    These tables have a one (Wiki) to many (WikiUser) relationship. How would I implement the following business rule in my LINQ entity classes: "A Wiki must have exactly one owner"? I've tried updating the tables as follows:

    Wiki
        WikiId (PK)
        OwnerId (FK to WikiUser)
        ...
    WikiUser
        WikiUserId (PK)
        WikiId
        UserId
        ...

    This enforces the constraint, but if I remove the owner's WikiUser record from the Wiki's WikiUser collection, I receive an ugly SqlException. This seems like it would be difficult to catch and handle in the UI. Is there a way to perform this check before the SqlException is generated? A better way to structure my database? A way to catch and translate the SqlException into something more useful?

    Edit: I would prefer to keep the validation rules within the LINQ entity classes if possible.

    Edit 2: Some more details about my specific situation. In my application, the user should be able to remove users from the Wiki. They should be able to remove any user except the one currently flagged as the "owner" of the Wiki (a Wiki must have exactly one owner at all times). In my control logic, I'd like to use something like this:

    ```csharp
    wiki.WikiUsers.Remove(wikiUser);
    mRepository.Save();
    ```

    and have any broken rules transferred to the UI layer. What I DON'T want to have to do is this:

    ```csharp
    if (wikiUser.WikiUserId != wiki.OwnerId)
    {
        wiki.WikiUsers.Remove(wikiUser);
        mRepository.Save();
    }
    else
    {
        // Handle errors.
    }
    ```

    I also don't particularly want to move the code to my repository (because there is nothing to indicate not to use the native Remove functions), so I also DON'T want code like this:

    ```csharp
    mRepository.RemoveWikiUser(wiki, wikiUser);
    mRepository.Save();
    ```

    This WOULD be acceptable:

    ```csharp
    try
    {
        wiki.WikiUsers.Remove(wikiUser);
        mRepository.Save();
    }
    catch (ValidationException ve)
    {
        // Display ve.Message
    }
    ```

    But this catches too many errors:

    ```csharp
    try
    {
        wiki.WikiUsers.Remove(wikiUser);
        mRepository.Save();
    }
    catch (SqlException se)
    {
        // Display se.Message
    }
    ```

    I would also PREFER NOT to explicitly call a business-rule check (although it may become necessary):

    ```csharp
    wiki.WikiUsers.Remove(wikiUser);
    if (wiki.CheckRules())
    {
        mRepository.Save();
    }
    else
    {
        // Display broken rules
    }
    ```
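
    If these are LINQ to SQL classes, one sketch that keeps the rule inside the entity layer uses the designer-generated OnValidate partial method, which SubmitChanges calls per changed entity. This assumes the repository's Save translates the Remove into a delete, and that ValidationException is the System.ComponentModel.DataAnnotations one; both are assumptions about the stack:

    ```csharp
    using System.Data.Linq;
    using System.ComponentModel.DataAnnotations;

    public partial class WikiUser
    {
        // Called by LINQ to SQL during SubmitChanges; guards owner deletes.
        // Wiki and OwnerId follow the schema described above.
        partial void OnValidate(ChangeAction action)
        {
            if (action == ChangeAction.Delete && Wiki != null
                && Wiki.OwnerId == WikiUserId)
            {
                throw new ValidationException("A wiki must have exactly one owner.");
            }
        }
    }
    ```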


  • Optimising speeds in HDF5 using Pytables

    - by Sree Aurovindh
    The problem concerns the writing speed of the computers (10 machines, 32-bit) and PostgreSQL query performance. I will explain the scenario in detail. I have about 80 GB of data (with appropriate database indexes in place). I am trying to read it from the PostgreSQL database and write it into HDF5 using PyTables. I have 1 table and 5 variable arrays in one HDF5 file. The implementation of HDF5 is not multithreaded or enabled for symmetric multi-processing. I have rented about 10 computers for a day and am trying to write with them in order to speed up my data handling.

    As for the PostgreSQL table, the overall record count is 140 million, and I have 5 primary/foreign-key referring tables. I am not using joins, as they would not be scalable, so for a single lookup I do 6 lookups without joins and write the results into HDF5 format. For each lookup I do 6 inserts, one into each table and its corresponding array. The queries are really simple:

    ```sql
    select * from x.train where tr_id = 1   -- (primary key & indexed)
    select q_t from x.qt where q_id = 2     -- (non-primary key but indexed)
    -- (similarly, five queries)
    ```

    Each computer writes two HDF5 files, so the total count comes to around 20 files.

    Some calculations and statistics:

    Total number of records: 143,700,000
    Total number of records per file: 143,700,000 / 20 = 7,185,000
    Total number of entries in each file: 7,185,000 * 5 = 35,925,000

    Current PostgreSQL database config: my current machine has 8 GB of RAM with an i7 2nd-generation processor, and I made the following changes to the PostgreSQL configuration file: shared_buffers: 2 GB, effective_cache_size: 4 GB.

    Note on current performance: I have run it for about ten hours, and the total number of records written for each file is about 621,000 * 5 = 3,105,000. The bottleneck is that I can only rent the machines for 10 hours per day (overnight), and at this speed the job will take about 11 days, which is too long for my experiments.

    Please suggest how I can improve this. Questions:

    1. Should I use symmetric multi-processing on those desktops (they have 2 cores with about 2 GB of RAM)? In that case, what is suggested or preferable?
    2. If I change my PostgreSQL configuration file and increase the RAM, will it enhance my process?
    3. Should I use multithreading? In that case, any links or pointers would be of great help.

    Thanks,
    Sree Aurovindh V
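
    One direction worth testing, as a sketch assuming psycopg2 on the client side: batch the per-id lookups into one round trip with ANY(...), since six single-row selects per record multiplies network latency by the full record count:

    ```python
    import psycopg2

    conn = psycopg2.connect("dbname=mydb")  # connection string is hypothetical
    cur = conn.cursor()

    def fetch_train_batch(ids):
        # One round trip for a whole batch of tr_ids instead of one SELECT each;
        # psycopg2 adapts the Python list to a SQL array for ANY().
        cur.execute("SELECT * FROM x.train WHERE tr_id = ANY(%s)", (list(ids),))
        return cur.fetchall()

    rows = fetch_train_batch(range(1, 10001))  # 10,000 rows in one query
    ```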


  • How Do I Prevent Rails From Treating Edit Fields_For Differently From New Fields_For

    - by James
    I am using Rails 3 beta 3 and CouchDB via CouchRest; I am not using ActiveRecord. I want to add multiple "Sections" to a "Guide" and add and remove sections dynamically via a little JavaScript. I have looked at all the screencasts by Ryan Bates and they have helped immensely. The only difference is that I want to save all the sections as an array of sections instead of individual sections. Basically like this:

    "sections" => [{"title" => "Foo1", "content" => "Bar1"}, {"title" => "Foo2", "content" => "Bar2"}]

    So basically I need the params hash to look like that when the form is submitted. When I create my form, I do the following:

    ```erb
    <%= form_for @guide, :url => { :action => "create" } do |f| %>
      <%= render :partial => 'section', :collection => @guide.sections %>
      <%= f.submit "Save" %>
    <% end %>
    ```

    And my section partial looks like this:

    ```erb
    <%= fields_for "sections[]", section do |guide_section_form| %>
      <%= guide_section_form.text_field :section_title %>
      <%= guide_section_form.text_area :content, :rows => 3 %>
    <% end %>
    ```

    When I create the guide with sections, this works perfectly: the params hash gives me a sections array just as I would like. The problem comes when I want to edit the guide/sections and save them again, because Rails inserts the id of the record into the id and name of each form field, which screws up the params hash on form submission. To be clear, here is the raw form output for a new resource:

    ```html
    <input type="text" size="30" name="sections[][section_title]" id="sections__section_title">
    <textarea rows="3" name="sections[][content]" id="sections__content" cols="40"></textarea>
    ```

    And here is what it looks like when editing an existing resource:

    ```html
    <input type="text" value="Foo1" size="30" name="sections[cd2f2759895b5ae6cb7946def0b321f1][section_title]" id="sections_cd2f2759895b5ae6cb7946def0b321f1_section_title">
    <textarea rows="3" name="sections[cd2f2759895b5ae6cb7946def0b321f1][content]" id="sections_cd2f2759895b5ae6cb7946def0b321f1_content" cols="40">Bar1</textarea>
    ```

    How do I force Rails to always use the new-resource behavior and not automatically add the id to the name and value? Do I have to create a custom form builder? Is there some other trick to prevent Rails from putting the id of the guide in there? I have tried a bunch of things and nothing is working. Thanks in advance!
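
    One thing that may be worth trying, offered as an unverified sketch against beta 3: fields_for accepts an :index option, and passing :index => nil is a commonly cited way to suppress the per-record index Rails injects into the field names:

    ```erb
    <%= fields_for "sections[]", section, :index => nil do |guide_section_form| %>
      <%= guide_section_form.text_field :section_title %>
      <%= guide_section_form.text_area :content, :rows => 3 %>
    <% end %>
    ```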


  • Large number of UPDATE queries slowing down page

    - by Bryan Lewis
    I am reading and validating large fixed-width text files (ranging from 10-50K lines) that are submitted via our ASP.NET website (coded in VB.NET). I do an initial scan of the file to check for basic issues (line length, etc.), then I import each row into a MS SQL table. Each DB row basically consists of a record_ID (primary, auto-incrementing) and about 50 varchar fields. After the insert is done, I run a validation function on the file that checks each field in each row against a bunch of criteria (trimmed length, isnumeric, range checks, etc.). If it finds an error in any field, it inserts a record into the Errors table, which has an error_ID, the record_ID and an error message. In addition, if the field fails in a particular way, I have to do a "reset" on that field. A reset might consist of blanking the entire field, or simply replacing the value with another value (e.g. replacing the string with a new one that has all illegal chars taken out).

    I have a 5,000-line test file. The upload, initial check, and import take about 5-6 seconds. The detailed error check and inserts into the Errors table take about 5-8 seconds (this file has about 1,200 errors in it). However, the "resets" part takes about 40-45 seconds for the 750 fields that need to be reset. When I comment out the resets function (returning immediately without actually calling the UPDATE stored proc), the process is very fast; with the resets turned on, the page takes 50 seconds to return.

    My UPDATE stored proc uses some recommended code from http://sommarskog.se/dynamic_sql.html, whereby it uses CASE instead of dynamic SQL:

    ```sql
    UPDATE dbo.Records
    SET dbo.Records.file_ID = CASE @field_name
                                  WHEN 'file_ID' THEN @field_value
                                  ELSE file_ID
                              END,
        . . .
        -- (all 50 varchar field CASE statements here)
        . . .
    WHERE dbo.Records.record_ID = @record_ID
    ```

    Is there any way I can help my performance here? Can I somehow group all of these UPDATE calls into a single transaction? Should I rework the UPDATE query somehow? Or is it just the sheer quantity of 750+ UPDATEs, and things are simply slow (it's a quad-proc server with 8 GB of RAM)? Any suggestions appreciated.
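
    On the transaction question: yes, the 750 autocommits can be grouped, and that alone often reclaims most of the per-statement commit overhead. A sketch in VB.NET (requires Imports System.Data and System.Data.SqlClient; the stored-proc name and FieldReset type are hypothetical stand-ins):

    ```vbnet
    ' Sketch: run all the reset UPDATEs on one connection inside a
    ' single transaction instead of 750 individual autocommits.
    Using conn As New SqlConnection(connectionString)
        conn.Open()
        Using tx As SqlTransaction = conn.BeginTransaction()
            For Each reset As FieldReset In resets ' hypothetical list of resets
                Using cmd As New SqlCommand("dbo.ResetField", conn, tx)
                    cmd.CommandType = CommandType.StoredProcedure
                    cmd.Parameters.AddWithValue("@record_ID", reset.RecordId)
                    cmd.Parameters.AddWithValue("@field_name", reset.FieldName)
                    cmd.Parameters.AddWithValue("@field_value", reset.FieldValue)
                    cmd.ExecuteNonQuery()
                End Using
            Next
            tx.Commit()
        End Using
    End Using
    ```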


  • Closing an MQ connection

    - by OakvilleWork
    Good afternoon. I wrote a project to get parked-queue info from IBM MQ, but it produces an error when attempting to close the connection. It is written in Java. Under Application in the Event Viewer on the MQ machine, it displays two errors. The first is:

    "Channel program ended abnormally. Channel program 'system.def.surconn' ended abnormally. Look at previous error messages for channel program 'system.def.surconn' in the error files to determine the cause of the failure."

    The other states:

    "Error on receive from host rnanaj (10.10.12.34). An error occurred receiving data from rnanaj (10.10.12.34) over TCP/IP. This may be due to a communications failure. The return code from the TCP/IP recv() call was 10054 (X'2746'). Record these values."

    This must be something in how I connect or close the connection. Below is my code to connect and close; any ideas?

    Connect:

    ```java
    _logger.info("Start");
    File outputFile = new File(System.getProperty("PROJECT_HOME"),
            "run/" + this.getClass().getSimpleName() + "." + System.getProperty("qmgr") + ".txt");
    FileUtils.mkdirs(outputFile.getParentFile());
    Connection jmsConn = null;
    Session jmsSession = null;
    QueueBrowser queueBrowser = null;
    BufferedWriter commandsBw = null;
    try {
        // get queue connection
        MQConnectionFactory MQConn = new MQConnectionFactory();
        MQConn.setHostName(System.getProperty("host"));
        MQConn.setPort(Integer.valueOf(System.getProperty("port")));
        MQConn.setQueueManager(System.getProperty("qmgr"));
        MQConn.setChannel("SYSTEM.DEF.SVRCONN");
        MQConn.setTransportType(JMSC.MQJMS_TP_CLIENT_MQ_TCPIP);
        jmsConn = (Connection) MQConn.createConnection();
        jmsSession = jmsConn.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue jmsQueue = jmsSession.createQueue("PARK");

        // browse through the messages
        queueBrowser = jmsSession.createBrowser(jmsQueue);
        Enumeration msgEnum = queueBrowser.getEnumeration();
        commandsBw = new BufferedWriter(new FileWriter(outputFile));
        String line = "DateTime\tMsgID\tOrigMsgID\tCorrelationID\tComputerName\tSubsystem\tDispatcherName\tProcessor\tJobID\tErrorMsg";
        commandsBw.write(line);
        commandsBw.newLine();
        while (msgEnum.hasMoreElements()) {
            Message message = (Message) msgEnum.nextElement();
            line = dateFormatter.format(new Date(message.getJMSTimestamp())) + "\t"
                    + message.getJMSMessageID() + "\t"
                    + message.getStringProperty("pkd_orig_jms_msg_id") + "\t"
                    + message.getJMSCorrelationID() + "\t"
                    + message.getStringProperty("pkd_computer_name") + "\t"
                    + message.getStringProperty("pkd_subsystem") + "\t"
                    + message.getStringProperty("pkd_dispatcher_name") + "\t"
                    + message.getStringProperty("pkd_processor") + "\t"
                    + message.getStringProperty("pkd_job_id") + "\t"
                    + message.getStringProperty("pkd_sysex_msg");
            _logger.info(line);
            commandsBw.write(line);
            commandsBw.newLine();
        }
    }
    ```

    Close:

    ```java
    finally {
        IO.close(commandsBw);
        if (queueBrowser != null) { try { queueBrowser.close(); } catch (Exception ignore) {} }
        if (jmsSession != null)  { try { jmsSession.close();  } catch (Exception ignore) {} }
        if (jmsConn != null)     { try { jmsConn.stop();      } catch (Exception ignore) {} }
    }
    ```
    Read the article

  • Smoke testing a .NET web application

    - by pdr
    I cannot believe I'm the first person to go through this thought process, so I'm wondering if anyone can help me out with it. Current situation: developers write a web site, operations deploy it. Once deployed, a developer Smoke Tests it, to make sure the deployment went smoothly. To me this feels wrong, it essentially means it takes two people to deploy an application; in our case those two people are on opposite sides of the planet and timezones come into play, causing havoc. But the fact remains that developers know what the minimum set of tests is and that may change over time (particularly for the web service portion of our app). Operations, with all due respect to them (and they would say this themselves), are button-pushers who need a set of instructions to follow. The manual solution is that we document the test cases and operations follow that document each time they deploy. That sounds painful, plus they may be deploying different versions to different environments (specifically UAT and Production) and may need a different set of instructions for each. On top of this, one of our near-future plans is to have an automated daily deploy environment, so then we'll have to instruct a computer as to how to deploy a given version of our app. I would dearly like to add to that instructions for how to smoke test the app. Now developers are better at documenting instructions for computers than they are for people, so the obvious solution seems to be to use a combination of nUnit (I know these aren't unit tests per se, but it is a built-for-purpose test runner) and either the Watin or Selenium APIs to run through the obvious browser steps and call to the web service and explain to the Operations guys how to run those unit tests. I can do that; I have mostly done it already. But wouldn't it be nice if I could make that process simpler still? At this point, the Operations guys and the computer are going to have to know which set of tests relate to which version of the app and tell the nUnit runner which base URL it should point to (say, www.example.com = v3.2 or test.example.com = v3.3). Wouldn't it be nicer if the test runner itself had a way of giving it a base URL and letting it download say a zip file, unpack it and edit a configuration file automatically before running any test fixtures it found in there? Is there an open source app that would do that? Is there a need for one? Is there a solution using something other than nUnit, maybe Fitnesse? For the record, I'm looking at .NET-based tools first because most of the developers are primarily .NET developers, but we're not married to it. If such a tool exists using other languages to write the tests, we'll happily adapt, as long as there is a test runner that works on Windows.

    Read the article

  • What are the Options for Storing Hierarchical Data in a Relational Database?

    - by orangepips
    Good Overviews One more Nested Intervals vs. Adjacency List comparison: the best comparison of Adjacency List, Materialized Path, Nested Set and Nested Interval I've found. Models for hierarchical data: slides with good explanations of tradeoffs and example usage Representing hierarchies in MySQL: very good overview of Nested Set in particular Hierarchical data in RDBMSs: most comprehensive and well organized set of links I've seen, but not much in the way on explanation Options Ones I am aware of and general features: Adjacency List: Columns: ID, ParentID Easy to implement. Cheap node moves, inserts, and deletes. Expensive to find level (can store as a computed column), ancestry & descendants (Bridge Hierarchy combined with level column can solve), path (Lineage Column can solve). Use Common Table Expressions in those databases that support them to traverse. Nested Set (a.k.a Modified Preorder Tree Traversal) First described by Joe Celko - covered in depth in his book Trees and Hierarchies in SQL for Smarties Columns: Left, Right Cheap level, ancestry, descendants Compared to Adjacency List, moves, inserts, deletes more expensive. Requires a specific sort order (e.g. created). So sorting all descendants in a different order requires additional work. Nested Intervals Combination of Nested Sets and Materialized Path where left/right columns are floating point decimals instead of integers and encode the path information. Bridge Table (a.k.a. Closure Table: some good ideas about how to use triggers for maintaining this approach) Columns: ancestor, descendant Stands apart from table it describes. Can include some nodes in more than one hierarchy. Cheap ancestry and descendants (albeit not in what order) For complete knowledge of a hierarchy needs to be combined with another option. Flat Table A modification of the Adjacency List that adds a Level and Rank (e.g. ordering) column to each record. Expensive move and delete Cheap ancestry and descendants Good Use: threaded discussion - forums / blog comments Lineage Column (a.k.a. Materialized Path, Path Enumeration) Column: lineage (e.g. /parent/child/grandchild/etc...) Limit to how deep the hierarchy can be. Descendants cheap (e.g. LEFT(lineage, #) = '/enumerated/path') Ancestry tricky (database specific queries) Database Specific Notes MySQL Use session variables for Adjacency List Oracle Use CONNECT BY to traverse Adjacency Lists PostgreSQL ltree datatype for Materialized Path SQL Server General summary 2008 offers HierarchyId data type appears to help with Lineage Column approach and expand the depth that can be represented.

    Read the article

  • WPF Reusing Xaml Effectively

    - by Steve
    Hi, I've recently been working on a project using WPF to produce a diagram. In this I must show text alongside symbols that illustrate information associated with the text. To draw the symbols I initially used some png images I had produced. Within my diagram these images appeared blurry and only looked worse when zoomed in on. To improve on this I decided I would use a vector rather than a rastor image format. Below is the method I used to get the rastor image from a file path: protected Image GetSymbolImage(string symbolPath, int symbolHeight) { Image symbol = new Image(); symbol.Height = symbolHeight; BitmapImage bitmapImage = new BitmapImage(); bitmapImage.BeginInit(); bitmapImage.UriSource = new Uri(symbolPath); bitmapImage.DecodePixelHeight = symbolHeight; bitmapImage.EndInit(); symbol.Source = bitmapImage; return symbol; } Unfortunately this does not recognise vector image formats. So instead I used a method like the following, where "path" is the file path to a vector image of the format .xaml: public static Canvas LoadXamlCanvas(string path) { //if a file exists at the specified path if (File.Exists(path)) { //store the text in the file string text = File.ReadAllText(path); //produce a canvas from the text StringReader stringReader = new StringReader(text); XmlReader xmlReader = XmlReader.Create(stringReader); Canvas c = (Canvas)XamlReader.Load(xmlReader); //return the canvas return c; } return null; } This worked but drastically killed performance when called repeatedly. I found the logic necessary for text to canvas conversion (see above) was the main cause of the performance problem therefore embedding the .xaml images would not alone resolve the performance issue. I tried using this method only on the initial load of my application and storing the resulting canvases in a dictionary that could later be accessed much quicker but I later realised when using the canvases within the dictionary I would have to make copies of them. All the logic I found online associated with making copies used a XamlWriter and XamlReader which would again just introduce a performance problem. The solution I used was to copy the contents of each .xaml image into its own user control and then make use of these user controls where appropriate. This means I now display vector graphics and performance is much better. However this solution to me seems pretty clumsy. I'm new to WPF and wonder if there is some built in way of storing and reusing xaml throughout an application? Apologies for the length of this question. I thought having a record of my attempts might help someone with any similar problem. Thanks.

    Read the article

  • how do i scroll through 100 photos in UIScrollView in IPhone

    - by mwangima
    I'm trying to scroll through images being downloaded from a users online album (like in the facebook iphone app) since i can't load all images into memory, i'm loading 3 at a time (prev,current & next). then removing image(prev-1) & image (next +1) from the uiscroller subviews. my logic works fine in the simulator but fails in the device with this error: [CALayer retain]: message sent to deallocated instance what could be the problem below is my code sample - (void)scrollViewDidEndDecelerating:(UIScrollView *)_scrollView { pageControlIsChangingPage = NO; CGFloat pageWidth = _scrollView.frame.size.width; int page = floor((_scrollView.contentOffset.x - pageWidth / 2) / pageWidth) + 1; if (page1 && page<=(pageControl.numberOfPages-3)) { [self removeThisView:(page-2)]; [self removeThisView:(page+2)]; } if(page0) { NSLog(@"<< PREVIOUS"); [self showPhoto:(page-1)]; } [self showPhoto:page]; if(page<(pageControl.numberOfPages-1)) { //NSLog(@"NEXT "); [self showPhoto:page+1]; NSLog(@"FINISHED LOADING NEXT "); } } -(void) showPhoto:(NSInteger)index { CGFloat cx = scrollView.frame.size.width*index; CGFloat cy = 40; CGRect rect=CGRectMake( 0, 0,320, 480); rect.origin.x = cx; rect.origin.y = cy; AsyncImageView* asyncImage = [[AsyncImageView alloc] initWithFrame:rect]; asyncImage.tag = 999; NSURL *url = [NSURL URLWithString:[pics objectAtIndex:index]]; [asyncImage loadImageFromURL:url place:CGRectMake(150, 190, 30, 30) member:memberid isSlide:@"Yes" picId:[picsIds objectAtIndex:index]]; [scrollView addSubview:asyncImage]; [asyncImage release]; } -(void) removeThisView:(NSInteger)index { if(index<[[scrollView subviews] count] && [[scrollView subviews] objectAtIndex:index]!=nil){ if ([[[scrollView subviews] objectAtIndex:index] isKindOfClass:[AsyncImageView class]] || [[[scrollView subviews] objectAtIndex:index] isKindOfClass:[UIImageView class]]) { [[[scrollView subviews] objectAtIndex:index] removeFromSuperview]; } } } For the record it works OK in the simulator, but not the iphone device itself. any ideas will be appreciated. cheers, fred.

    Read the article

  • NHibernate unintentional lazy property loading

    - by chiccodoro
    I introduced a mapping for a business object which has (among others) a property called "Name": public class Foo : BusinessObjectBase { ... public virtual string Name { get; set; } } For some reason, when I fetch "Foo" objects, NHibernate seems to apply lazy property loading (for simple properties, not associations): The following code piece generates n+1 SQL statements, whereof the first only fetches the ids, and the remaining n fetch the Name for each record: ISession session = ...IQuery query = session.CreateQuery(queryString); ITransaction tx = session.BeginTransaction(); List<Foo> result = new List<Foo>(); foreach (Foo foo in query.Enumerable()) { result.Add(foo); } tx.Commit(); session.Close(); produces: NHibernate: select foo0_.FOO_ID as col_0_0_ from V1_FOO foo0_<br/> NHibernate: SELECT foo0_.FOO_ID as FOO1_2_0_, foo0_.NAME as NAME2_0_ FROM V1_FOO foo0_ WHERE foo0_.FOO_ID=:p0;:p0 = 81<br/> NHibernate: SELECT foo0_.FOO_ID as FOO1_2_0_, foo0_.NAME as NAME2_0_ FROM V1_FOO foo0_ WHERE foo0_.FOO_ID=:p0;:p0 = 36470<br/> NHibernate: SELECT foo0_.FOO_ID as FOO1_2_0_, foo0_.NAME as NAME2_0_ FROM V1_FOO foo0_ WHERE foo0_.FOO_ID=:p0;:p0 = 36473 Similarly, the following code leads to a LazyLoadingException after session is closed: ISession session = ... ITransaction tx = session.BeginTransaction(); Foo result = session.Load<Foo>(id); tx.Commit(); session.Close(); Console.WriteLine(result.Name); Following this post, "lazy properties ... is rarely an important feature to enable ... (and) in Hibernate 3, is disabled by default." So what am I doing wrong? I managed to work around the LazyLoadingException by doing a NHibernateUtil.Initialize(foo) but the even worse part are the n+1 sql statements which bring my application to its knees. This is how the mapping looks like: <class name="Foo" table="V1_FOO"> ... <property name="Name" column="NAME"/> </class> BTW: The abstract "BusinessObjectBase" base class encapsulates the ID property which serves as the internal identifier.

    Read the article

  • CSS Collapsing/Hiding divs with no data in <span>

    - by Chance
    I am trying to display an address which includes the following information: Title, division, address1, address2, town/state/zip, and country (5 seperate lines worth of data). The problem is sometimes the company may only have the title, address1, and town/state/zip yet other times it may be all but address2. This is determined upon a db record request server side. Therefore how can I make my output look proper when some of my labels will be blank? I would like div's that contain an empty span to be essentially collapsed/removed. My only idea of how was to use jquery and a selector to find all divs with blank spans (since thats all an asp.net label really is) and then remove those divs however this seems like such bad form. Is there any way to do this with css? Possible Code would be something like: $('span:empty:only-child').parent('div').remove(); Picture Examples (Ignore spacing/indentation issues which I will fix) Missing Division, Address2, and Country All Possible Fields The Html <asp:Label runat="server" ID="lblBillingAddressHeader" CssClass="lblBillingAddressHeader" Text="Billing Address:" /> <div style="position:relative; top:150px; left: 113px;"> <div class="test"> <asp:Label runat="server" ID="lblBillingDivision" CssClass="lblBillingShippingDivisionFont" /> </div> <div class="test"> <asp:Label runat="server" ID="lblBillingAddress" CssClass="lblBillingShippingFont" /> </div> <div class="test"> <asp:Label runat="server" ID="lblBillingAddress2" CssClass="lblBillingShippingFont" /> </div> <div class="test"> <asp:Label runat="server" ID="lblBillingAddress3" CssClass="lblBillingShippingFont" /> </div> <div class="test"> <asp:Label runat="server" ID="lblBillingAddress4" CssClass="lblBillingShippingFont" /> </div> </div> The CSS .test { position: relative; top: 0px; left: 0px; height: 12px; width: 300px; } .lblBillingShippingDivisionFont { font-size: small; font-weight: bold; } .lblBillingShippingFont { font-size: 10.6px; }

    Read the article
