Search Results

Search found 14780 results on 592 pages for 'low level'.


  • LDAP over SSL with an EFI Fiery printer

    - by austinian
    I've got a printer with a Fiery running 8e Release 2. I can authenticate users against AD using the LDAP configuration, but I can only get it to work if I don't use SSL/TLS, and only if I use SIMPLE authentication. Right now, it's authenticating using a fairly low-impact user, but it's also the only system on our network that's not using LDAPS. I can get AD info fine over LDAPS using ldp.exe from my machine, our firewall, our mail filter, our linux boxes, etc. The only problem child is the Fiery. I've added the LDAP server certificate as a trusted cert to the Fiery, but after I check the box for Secure Communication and change the port to 636, pressing Validate results in a dialog box coming up saying: LDAP Validation Failed Server Name invalid or server is unavailable. I've tried changing the server name to use just the name, the FQDN, and the IP address, and changed it to another server, just to see if it was just this AD server that was fussy with the Fiery. EDIT: removed LDP output, added packet capture analysis from wireshark: The conversation seems pretty normal to me, up to the point where the Fiery terminates the connection after the server sends back a handshake response. Maybe they messed up their TLS implementation? I'm trying support, but it's been fairly useless so far. The cert is a SHA-2 (sha256RSA) 2048-bit certificate. Also, it looks like the Fiery is specifying TLS 1.0. Looking at http://msdn.microsoft.com/en-us/library/windows/desktop/aa374757(v=vs.85).aspx, I'm not seeing SHA256 and TLS 1.0 combination being supported by SChannel. headdesk perhaps that's why, after the DC changes the cipher spec, the connection is terminated by the Fiery? TLS 1.1 and 1.2 are enabled on the DC. Wireshark conversation: DC: 172.17.2.22, Fiery: 172.17.2.42 No. Time Source Source Port Destination Destination Port Protocol Length Info 1 0.000000000 172.17.2.42 48633 172.17.2.22 ldaps TCP 74 48633 > ldaps [SYN] Seq=0 Win=5840 Len=0 MSS=1460 SACK_PERM=1 TSval=3101761 TSecr=0 WS=4 2 0.000182000 Dell_5e:94:e3 Broadcast ARP 60 Who has 172.17.2.42? 
Tell 172.17.2.22 3 0.000369000 TyanComp_c9:0f:90 Dell_5e:94:e3 ARP 60 172.17.2.42 is at 00:e0:81:c9:0f:90 4 0.000370000 172.17.2.22 ldaps 172.17.2.42 48633 TCP 74 ldaps > 48633 [SYN, ACK] Seq=0 Ack=1 Win=8192 Len=0 MSS=1460 WS=256 SACK_PERM=1 TSval=67970573 TSecr=3101761 5 0.000548000 172.17.2.42 48633 172.17.2.22 ldaps TCP 66 48633 > ldaps [ACK] Seq=1 Ack=1 Win=5840 Len=0 TSval=3101761 TSecr=67970573 6 0.001000000 172.17.2.42 48633 172.17.2.22 ldaps TLSv1 147 Client Hello 7 0.001326000 172.17.2.22 ldaps 172.17.2.42 48633 TCP 1514 [TCP segment of a reassembled PDU] 8 0.001513000 172.17.2.22 ldaps 172.17.2.42 48633 TCP 1514 [TCP segment of a reassembled PDU] 9 0.001515000 172.17.2.42 48633 172.17.2.22 ldaps TCP 66 48633 > ldaps [ACK] Seq=82 Ack=1449 Win=8736 Len=0 TSval=3101761 TSecr=67970573 10 0.001516000 172.17.2.42 48633 172.17.2.22 ldaps TCP 66 48633 > ldaps [ACK] Seq=82 Ack=2897 Win=11632 Len=0 TSval=3101761 TSecr=67970573 11 0.001732000 172.17.2.22 ldaps 172.17.2.42 48633 TCP 1514 [TCP segment of a reassembled PDU] 12 0.001737000 172.17.2.22 ldaps 172.17.2.42 48633 TLSv1 1243 Server Hello, Certificate, Certificate Request, Server Hello Done 13 0.001738000 172.17.2.42 48633 172.17.2.22 ldaps TCP 66 48633 > ldaps [ACK] Seq=82 Ack=4345 Win=14528 Len=0 TSval=3101761 TSecr=67970573 14 0.001739000 172.17.2.42 48633 172.17.2.22 ldaps TCP 66 48633 > ldaps [ACK] Seq=82 Ack=5522 Win=17424 Len=0 TSval=3101761 TSecr=67970573 15 0.002906000 172.17.2.42 48633 172.17.2.22 ldaps TLSv1 78 Certificate 16 0.004155000 172.17.2.42 48633 172.17.2.22 ldaps TLSv1 333 Client Key Exchange 17 0.004338000 172.17.2.22 ldaps 172.17.2.42 48633 TCP 66 ldaps > 48633 [ACK] Seq=5522 Ack=361 Win=66304 Len=0 TSval=67970573 TSecr=3101762 18 0.004338000 172.17.2.42 48633 172.17.2.22 ldaps TLSv1 72 Change Cipher Spec 19 0.005481000 172.17.2.42 48633 172.17.2.22 ldaps TLSv1 327 Encrypted Handshake Message 20 0.005645000 172.17.2.22 ldaps 172.17.2.42 48633 TCP 66 ldaps > 48633 [ACK] Seq=5522 Ack=628 Win=66048 Len=0 TSval=67970574 TSecr=3101762 21 0.010247000 172.17.2.22 ldaps 172.17.2.42 48633 TLSv1 125 Change Cipher Spec, Encrypted Handshake Message 22 0.016451000 172.17.2.42 48633 172.17.2.22 ldaps TCP 66 48633 > ldaps [FIN, ACK] Seq=628 Ack=5581 Win=17424 Len=0 TSval=3101765 TSecr=67970574 23 0.016630000 172.17.2.22 ldaps 172.17.2.42 48633 TCP 66 ldaps > 48633 [ACK] Seq=5581 Ack=629 Win=66048 Len=0 TSval=67970575 TSecr=3101765 24 0.016811000 172.17.2.22 ldaps 172.17.2.42 48633 TCP 60 ldaps > 48633 [RST, ACK] Seq=5581 Ack=629 Win=0 Len=0
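    A quick way to confirm which TLS versions the DC will actually complete a handshake on is to probe port 636 directly. Below is a minimal Python sketch, using the DC address from the capture above; the ssl module protocol constants are standard, though very old versions may be disabled by the local OpenSSL policy. If TLS 1.0 fails here while 1.2 succeeds, that would point at the protocol version offered in the Fiery's Client Hello rather than at the certificate itself.

        # Probe which TLS versions the LDAPS endpoint will negotiate.
        import socket, ssl

        HOST, PORT = "172.17.2.22", 636   # the DC from the capture above

        for name, proto in [("TLSv1.0", ssl.PROTOCOL_TLSv1),
                            ("TLSv1.1", ssl.PROTOCOL_TLSv1_1),
                            ("TLSv1.2", ssl.PROTOCOL_TLSv1_2)]:
            ctx = ssl.SSLContext(proto)
            ctx.check_hostname = False
            ctx.verify_mode = ssl.CERT_NONE      # only the handshake matters here
            try:
                with socket.create_connection((HOST, PORT), timeout=5) as raw:
                    with ctx.wrap_socket(raw, server_hostname=HOST) as tls:
                        print(name, "negotiated, cipher:", tls.cipher())
            except Exception as exc:
                print(name, "failed:", exc)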

    Read the article

  • Google Maps Rollover Problem in a Flex Website

    - by Laxmidi
    Hi, I'm using Google Maps in my Flex site to create a map. I've got polygons overlayed on the map. When the user rolls over a polygon an infowindow opens identifying the area and the fill Alpha of the area is set to 0. On roll-out, the info window is removed and the fill Alpha is returned to the default, 0.2. The polygons display and the InfoWindow is added and removed correctly. The problem is that the change in fill alpha only occurs on the very last polygon in the list. So for example, if I have polygons A, B, C, and D. If I rollover A, then A's alpha should change. But, instead D's alpha changes. No matter which polygon I rollover, the last polygon's alpha changes. It's weird, because the infoWindows behave correctly on rollover. So, if I rollover polygon A, the correct information for InfoWindow A appears. Please see the code below: private function allEncodedPolygons(event:MouseEvent) : void { var myPaneManager:IPaneManager = map.getPaneManager(); var myMapPane:IPane = myPaneManager.createPane(); if (allHoodsToggle.selected) { map.clearOverlays(); mapType.selectedIndex = -1; for each (var neighbNode:XML in detailMapResultData){ outlinePolygon = this.createPoly(neighbNode); map.addOverlay(outlinePolygon)}; allHoodsToggle.removeEventListener(MouseEvent.CLICK, allEncodedPolygons); } else {myPaneManager.clearOverlays(); allHoodsToggle.removeEventListener(MouseEvent.CLICK, allEncodedPolygons); } } The function below creates the polygons and has the rollover function: private var neighbShapes:Polygon; private function createPoly(neighbNode:XML):Polygon { var optionsDefault:PolygonOptions = new PolygonOptions( { strokeStyle: {thickness: 5, color: 0xFFFF00, alpha: 0.4, pixelHinting: true}, fillStyle: { alpha: 0.2 }} ); var neighbCenterLat:Number = neighbNode.latitudeCenter.toString(); var neighbCenterLong:Number = neighbNode.longitudeCenter.toString(); var neighbCenter:LatLng = new LatLng(neighbCenterLat,neighbCenterLong); var optionsHover:PolygonOptions = new PolygonOptions( { fillStyle: { alpha: 0.0 }} ); var encodedData:EncodedPolylineData = new EncodedPolylineData(neighbNode.encoding.toString(), neighbNode.zoomFactor.toString(), neighbNode.level.toString(), neighbNode.numlevels.toString()); var encodedList:Array = [encodedData]; neighbShapes = Polygon.fromEncoded(encodedList, optionsDefault); neighbShapes.addEventListener(MapMouseEvent.CLICK, function(event:MapMouseEvent): void { map.openInfoWindow(event.latLng, new InfoWindowOptions({content: neighbNode.name.toString(), hasCloseButton:false, hasShadow:true})); }); neighbShapes.addEventListener(MapMouseEvent.ROLL_OVER, function(event:MapMouseEvent): void { neighbShapes.setOptions(optionsHover); map.openInfoWindow(neighbCenter, new InfoWindowOptions({content: neighbNode.name.toString(), hasCloseButton:false, hasShadow:false})); }); neighbShapes.addEventListener(MapMouseEvent.ROLL_OUT, function(event:MapMouseEvent): void { neighbShapes.setOptions(optionsDefault); }); return neighbShapes; } Any suggestions as to why the function that changes the alpha is firing on the last polygon only, even though the InfoWindow appears correctly? If anyone has any ideas, I'd love to hear them. Thanks. -Laxmidi
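    The symptom (every rollover acting on the last polygon created) matches the usual shared-variable closure pitfall: all of the ROLL_OVER handlers read the single neighbShapes instance field, which ends up pointing at the last polygon assigned, while neighbNode is a per-call parameter and so stays correct. Below is a minimal Python sketch of the same pitfall and the usual fix, with illustrative names rather than the Google Maps API; in the ActionScript, one thing worth checking is whether declaring the variable locally inside createPoly changes the behaviour.

        # Pitfall: every callback reads the one shared variable at call time.
        shape = None
        callbacks = []
        for name in ["A", "B", "C", "D"]:
            shape = {"name": name}
            callbacks.append(lambda: print("roll over", shape["name"]))
        for cb in callbacks:
            cb()                      # prints "roll over D" four times

        # Fix: capture the current value per iteration (default argument binds now).
        callbacks = []
        for name in ["A", "B", "C", "D"]:
            shape = {"name": name}
            callbacks.append(lambda s=shape: print("roll over", s["name"]))
        for cb in callbacks:
            cb()                      # prints A, B, C, D as expected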

    Read the article

  • IoC/DI in the face of winforms and other generated code

    - by Kaleb Pederson
    When using dependency injection (DI) and inversion of control (IoC) objects will typically have a constructor that accepts the set of dependencies required for the object to function properly. For example, if I have a form that requires a service to populate a combo box you might see something like this: // my files public interface IDataService { IList<MyData> GetData(); } public interface IComboDataService { IList<MyComboData> GetComboData(); } public partial class PopulatedForm : BaseForm { private IDataService service; public PopulatedForm(IDataService service) { //... InitializeComponent(); } } This works fine at the top level, I just use my IoC container to resolve the dependencies: var form = ioc.Resolve<PopulatedForm>(); But in the face of generated code, this gets harder. In winforms a second file composing the rest of the partial class is generated. This file references other components, such as custom controls, and uses no-args constructors to create such controls: // generated file: PopulatedForm.Designer.cs public partial class PopulatedForm { private void InitializeComponent() { this.customComboBox = new UserCreatedComboBox(); // customComboBox has an IComboDataService dependency } } Since this is generated code, I can't pass in the dependencies and there's no easy way to have my IoC container automatically inject all the dependencies. One solution is to pass in the dependencies of each child component to PopulatedForm even though it may not need them directly, such as with the IComboDataService required by the UserCreatedComboBox. I then have the responsibility to make sure that the dependencies are provided through various properties or setter methods. Then, my PopulatedForm constructor might look as follows: public PopulatedForm(IDataService service, IComboDataService comboDataService) { this.service = service; InitializeComponent(); this.customComboBox.ComboDataService = comboDataService; } Another possible solution is to have the no-args constructor to do the necessary resolution: public class UserCreatedComboBox { private IComboDataService comboDataService; public UserCreatedComboBox() { if (!DesignMode && IoC.Instance != null) { comboDataService = Ioc.Instance.Resolve<IComboDataService>(); } } } Neither solution is particularly good. What patterns and alternatives are available to more capably handle dependency-injection in the face of generated code? I'd love to see both general solutions, such as patterns, and ones specific to C#, Winforms, and Autofac.
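    One further shape that sometimes comes up alongside the two options above is a container "build-up" step: resolve the top-level form, let the generated constructor create the children through their no-args constructors, then walk the children and satisfy whatever dependencies they declare via setter injection. Below is a minimal, framework-agnostic Python sketch of that idea; the names are illustrative and this is not Autofac's API.

        # Minimal sketch: after the generated constructor has built child controls,
        # a container "builds up" each one by filling in its declared dependencies.
        class ComboDataService:
            def get_combo_data(self):
                return ["a", "b"]

        class UserCreatedComboBox:
            needs = {"combo_data_service": ComboDataService}   # declared dependency
            def __init__(self):                # no-args, as the generated code requires
                self.combo_data_service = None

        class Container:
            def __init__(self):
                self._registry = {ComboDataService: ComboDataService()}
            def build_up(self, obj):
                for attr, dep_type in getattr(obj, "needs", {}).items():
                    setattr(obj, attr, self._registry[dep_type])

        container = Container()
        combo = UserCreatedComboBox()          # as the designer file would create it
        container.build_up(combo)
        print(combo.combo_data_service.get_combo_data())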

    Read the article

  • How do I record video to a local disk in AIR?

    - by Jim OHalloran
    I'm trying to record a webcam's video and audio to an FLV file stored on the user's local hard disk. I have a version of this code working which uses NetConnection and NetStream to stream the video over a network to an FMS (Red5) server, but I'd like to be able to store the video locally for low-bandwidth/flaky-network situations. I'm using Flex 3.2 and AIR 1.5, so I don't believe there should be any sandbox restrictions which prevent this from occurring. Things I've seen:
    - FileStream - allows reading/writing local files but has no .attachCamera and .attachAudio methods for creating an FLV.
    - flvrecorder - produces screen grabs from the webcam and creates its own FLV file. Doesn't support audio. License prohibits commercial use.
    - SimpleFLVWriter.as - similar to flvrecorder without the weird license. Doesn't support audio.
    - This stackoverflow post - which demonstrates the playback of a video from local disk using a NetConnection/NetStream.
    Given that I already have a version which uses NetStream to stream to the server, I thought the last was most promising and went ahead and put together this demo application. The code compiles and runs without errors, but I don't have an FLV file on disk when the stop button is clicked.
        <mx:Script>
            <![CDATA[
                private var _diskStream:NetStream;
                private var _diskConn:NetConnection;
                private var _camera:Camera;
                private var _mic:Microphone;

                public function cmdStart_Click():void {
                    _camera = Camera.getCamera();
                    _camera.setQuality(144000, 85);
                    _camera.setMode(320, 240, 15);
                    _camera.setKeyFrameInterval(60);
                    _mic = Microphone.getMicrophone();
                    videoDisplay.attachCamera(_camera);
                    _diskConn = new NetConnection();
                    _diskConn.connect(null);
                    _diskStream = new NetStream(_diskConn);
                    _diskStream.client = this;
                    _diskStream.attachCamera(_camera);
                    _diskStream.attachAudio(_mic);
                    _diskStream.publish("file://c:/test.flv", "record");
                }

                public function cmdStop_Click() {
                    _diskStream.close();
                    videoDisplay.close();
                }
            ]]>
        </mx:Script>
        <mx:VideoDisplay x="10" y="10" width="320" height="240" id="videoDisplay" />
        <mx:Button x="10" y="258" label="Start" click="cmdStart_Click()" id="cmdStart"/>
        <mx:Button x="73" y="258" label="Stop" id="cmdStop" click="cmdStop_Click()"/>
        </mx:WindowedApplication>
    It seems to me that there's either something wrong with the above code which is preventing it from working, or NetStream just can't be used in this way to record video. What I'd like to know is: a) What (if anything) is wrong with the code above? b) If NetStream doesn't support recording to disk, are there any other alternatives which capture audio AND video to a file on the user's local hard disk? Thanks in advance!

    Read the article

  • Is it normal for a programmer with 2 years experience to take a long time to code simple programs?

    - by ajax81
    Hi all, I'm a relatively new programmer (18 months on the scene), and I'm finally getting to the point where I'm comfortable accepting projects and developing solutions under minimal supervision. Unfortunately, this also means that I've become acutely aware of my performance shortfalls, the most prevalent of which is the amount of time it takes me to develop, test, and submit algorithms for review. A great example of what I'm talking about occurred this week when I was tasked with developing a simple XML web service (asp.net 3.5) callable via client-side JavaScript, that accepts a single parameter and returns a dataset output to a modal window (please note this is the first time I've had to develop a web service and have had ZERO experience creating/consuming them...let alone calling them from JS client side). Keeping a long story short -- I worked on it for 4 days straight, all day each day, for a grand total of 36 hours, not including the time I spent dwelling on the problem in the shower, the morning commute, and laying awake in bed at night. I learned a great deal about web services and xml/json/javascript...but was called in for a management review to discuss the length of time it took me to develop the solution. In the meeting, I was praised for the quality of my work and was in fact told that my effort was commendable. However, they (senior leads and pm's) weren't impressed with the amount of time it took me to develop the solution and expressed that they would have liked to see the solution in roughly 1/3 of the time it took me. I guess what concerns me the most is that I've identified this pattern as common for myself. Between online videos, book research, and trial/error coding...if its something I haven't seen before, I can spend up to two weeks on a problem that seems to only take the pros in the videos moments to code up. And of course, knowing that management isn't happy with this pattern has shaken me up a bit. To sum up, I have some very specific questions I'd like to ask, and would greatly appreciate your objective professional feedback. Is my experience as a junior programmer common among new developers? Or is it possible that I'm just not cut out for the work? If you suspect that my experience is not common and that there may be an aptitude issue, do you have any suggestions/solutions that I could propose to management to help bring me up to speed? Do seasoned, professional programmers ever encounter knowledge barriers that considerably delay deliverables? When you started out in the industry, did you know how to "do it all"? If not, how long did it take you to be perceived as "proficient"? Was it a natural progression of trial and error, or was there a particular zen moment when you knew you had achieved super saiyen power level? Anyways, thanks for taking the time to read my question(s). I don't know if this is the right place to ask for professional career guidance, but I greatly appreciate your willingness to help me out. Cheers, Daniel

    Read the article

  • Java thread dump where main thread has no call stack? (jsvc)

    - by dwhsix
    We have a java process running as a daemon (under jsvc). Every several days it just stops doing any work; output to the logfile stops (it is pretty verbose, on 5-minute intervals) and it consumes no CPU or IO. There are no exceptions logged in the logfile nor in syserr or sysout. The last log statement is just prior to a db commit being done, but there is no open connection on the db server (MySQL) and reviewing the code, there should always be additional log output after that, even if it had encountered an exception that was going to bubble up. The most curious thing I find is that in the thread dump (included below), there's no thread in our code at all, and the main thread seems to have no context whatsoever: "main" prio=10 tid=0x0000000000614000 nid=0x445d runnable [0x0000000000000000] java.lang.Thread.State: RUNNABLE As noted earlier, this is a daemon process running using jsvc, but I don't know if that has anything to do with it (I can restructure the code to also allow running it directly, to test). Any suggestions on what might be happening here? Thanks... dwh Full thread dump: Full thread dump Java HotSpot(TM) 64-Bit Server VM (14.2-b01 mixed mode): "MySQL Statement Cancellation Timer" daemon prio=10 tid=0x00002aaaf81b8800 nid=0x447b in Object.wait() [0x00002aaaf6a22000] java.lang.Thread.State: WAITING (on object monitor) at java.lang.Object.wait(Native Method) - waiting on <0x00002aaab5556d50> (a java.util.TaskQueue) at java.lang.Object.wait(Object.java:485) at java.util.TimerThread.mainLoop(Timer.java:483) - locked <0x00002aaab5556d50> (a java.util.TaskQueue) at java.util.TimerThread.run(Timer.java:462) "Low Memory Detector" daemon prio=10 tid=0x00000000006a4000 nid=0x4479 runnable [0x0000000000000000] java.lang.Thread.State: RUNNABLE "CompilerThread1" daemon prio=10 tid=0x00000000006a1000 nid=0x4477 waiting on condition [0x0000000000000000] java.lang.Thread.State: RUNNABLE "CompilerThread0" daemon prio=10 tid=0x000000000069d000 nid=0x4476 waiting on condition [0x0000000000000000] java.lang.Thread.State: RUNNABLE "Signal Dispatcher" daemon prio=10 tid=0x000000000069b000 nid=0x4465 waiting on condition [0x0000000000000000] java.lang.Thread.State: RUNNABLE "Finalizer" daemon prio=10 tid=0x0000000000678800 nid=0x4464 in Object.wait() [0x00002aaaf61d6000] java.lang.Thread.State: WAITING (on object monitor) at java.lang.Object.wait(Native Method) - waiting on <0x00002aaab54a1cb8> (a java.lang.ref.ReferenceQueue$Lock) at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:118) - locked <0x00002aaab54a1cb8> (a java.lang.ref.ReferenceQueue$Lock) at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:134) at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:159) "Reference Handler" daemon prio=10 tid=0x0000000000676800 nid=0x4463 in Object.wait() [0x00002aaaf60d5000] java.lang.Thread.State: WAITING (on object monitor) at java.lang.Object.wait(Native Method) - waiting on <0x00002aaab54a1cf0> (a java.lang.ref.Reference$Lock) at java.lang.Object.wait(Object.java:485) at java.lang.ref.Reference$ReferenceHandler.run(Reference.java:116) - locked <0x00002aaab54a1cf0> (a java.lang.ref.Reference$Lock) "main" prio=10 tid=0x0000000000614000 nid=0x445d runnable [0x0000000000000000] java.lang.Thread.State: RUNNABLE "VM Thread" prio=10 tid=0x0000000000670000 nid=0x4462 runnable "GC task thread#0 (ParallelGC)" prio=10 tid=0x000000000061e000 nid=0x445e runnable "GC task thread#1 (ParallelGC)" prio=10 tid=0x0000000000620000 nid=0x445f runnable "GC task thread#2 (ParallelGC)" prio=10 
tid=0x0000000000622000 nid=0x4460 runnable "GC task thread#3 (ParallelGC)" prio=10 tid=0x0000000000623800 nid=0x4461 runnable "VM Periodic Task Thread" prio=10 tid=0x00000000006a6800 nid=0x447a waiting on condition JNI global references: 797 Heap PSYoungGen total 162944K, used 48388K [0x00002aaadff40000, 0x00002aaaf2ab0000, 0x00002aaaf5490000) eden space 102784K, 47% used [0x00002aaadff40000,0x00002aaae2e81170,0x00002aaae63a0000) from space 60160K, 0% used [0x00002aaaeb850000,0x00002aaaeb850000,0x00002aaaef310000) to space 86720K, 0% used [0x00002aaae63a0000,0x00002aaae63a0000,0x00002aaaeb850000) PSOldGen total 699072K, used 699072K [0x00002aaab5490000, 0x00002aaadff40000, 0x00002aaadff40000) object space 699072K, 100% used [0x00002aaab5490000,0x00002aaadff40000,0x00002aaadff40000) PSPermGen total 21248K, used 9252K [0x00002aaab0090000, 0x00002aaab1550000, 0x00002aaab5490000) object space 21248K, 43% used [0x00002aaab0090000,0x00002aaab09993e8,0x00002aaab1550000)

    Read the article

  • Why do we need different CPU architecture for server & mini/mainframe & mixed-core?

    - by claws
    Hello, I was just wondering what other CPU architectures are available besides Intel and AMD, and found the List of CPU architectures on Wikipedia. It categorizes notable CPU architectures into the following groups: Embedded CPU architectures, Microcomputer CPU architectures, Workstation/Server CPU architectures, Mini/Mainframe CPU architectures, and Mixed-core CPU architectures. I was analyzing their purposes and have a few doubts. I'm taking the microcomputer (PC) CPU architecture as the reference and comparing the others to it.

    Embedded CPU architectures: These are a completely new world. Embedded systems are small and do a very specific task, mostly in real time and with low power consumption, so we do not need as many or as wide registers as are available in a microcomputer CPU (a typical PC). In other words, we need a new, small and tiny architecture, hence a new architecture and a new instruction set (RISC). This point also clarifies why we need a separate operating system (an RTOS).

    Workstation/Server CPU architectures: I don't know what a workstation is; could someone clarify that? As for the server, it is dedicated to running specific software (server software like httpd, MySQL, etc.). Even if other processes run, we need to give the server process priority, so there is a need for a different scheduling scheme, and thus an operating system different from a general-purpose one. If you have more reasons why a server OS is needed, please mention them. But I don't get why we need a new CPU architecture. Why can't the microcomputer CPU architecture do the job? Can someone please clarify?

    Mini/Mainframe CPU architectures: Again, I don't know what these are, or what minicomputers and mainframes are used for. I just know they are very big and occupy a complete floor, but I have never read about the real-world problems they are trying to solve. If anyone is working on one of these, please share your knowledge. Can someone clarify their purpose, and why the microcomputer CPU architecture is not suitable for them? Is there a new kind of operating system for these too? Why?

    Mixed-core CPU architectures: Never heard of these.

    If possible, please keep your answer in this format: XYZ CPU architectures; purpose of XYZ; need for a new architecture (why can't the current microcomputer CPU architecture work? They go up to 3 GHz and have up to 8 cores); need for a new operating system (why do we need a new kind of operating system for this kind of architecture?).

    Read the article

  • Absolute reRendering using RichFaces

    - by wheelie
    Hey there, I am implementing copy/paste functionality for a complex object tree, this means you can copy an object and paste it where the object type is the same. Therefore I need to reRender the <a4j:commandLink>-s which are performing the paste action (so it will show on the GUI or not). Simplified example: Problem is that copy links are deep in the tree. How is it possible to reRender on a higher level in the component tree? (very)Simplified example: ... <h:form id="form1"> ... <a4j:commandLink value="Copy" reRender=":paste1, :paste2, :paste3" /> <a4j:commandLink id="paste1" value="Paste" rendered="#{myBean.myHashMap.key}" /> <a4j:outputPanel> <a4j:region renderRegionOnly="true"> <a4j:commandLink value="Copy" reRender=":paste1, :paste2, :paste3" /> <a4j:commandLink id="paste2" value="Paste" rendered="#{myBean.myHashMap.key}" /> </a4j:region> <a4j:outputPanel> <a4j:region renderRegionOnly="true"> <a4j:commandLink value="Copy" reRender=":paste1, :paste2, :paste3" /> <a4j:commandLink id="paste3" value="Paste" rendered="#{myBean.myHashMap.key}" /> </a4j:region> </a4j:outputPanel> </a4j:outputPanel> ... </h:form> Something like that. In practise this differs in that a rich:tree is displayed. Also, there can be multiple instances of the same paste link: object:0::paste3, object:1::paste3. private final String pasteIDs = ":xxPaste, ... , :xyPaste"; According to the RichFaces reference, putting the separator to the beginning of the ID means it is an "absolute" search expression, however this way i get the same result: only the 'local' paste link gets rerendered, the others not. Every copy-paste link pair is encapsulated in <a4j:region renderRegionOnly="true">, because it is necessary for other components to restrict the reRender to that region. Could this be blocking the reRender I want to make? Also I want to rerender exactly those paste links, so no other rerender action is triggered. Hope it is clear what i want to achieve. Any help would be appreciated! Daniel

    Read the article

  • Salesforce consuming XML and display data in Visualforce report

    - by JavaKungFu
    Firstly, this question requires a bit of introduction so please bear with me. The high level is that I am connecting to a outside web service which will return some XML to my apex controller. The idea is that I want to display the XML returned into a nice tabular format in a VisualForce page. The format of the XML coming back will look something like this: <Wrapper><reportTable name='table_id' title='Report Title'> <row> <Element1><![CDATA[campaign_id]]></Element1> <Element2><![CDATA[577373]]></Element2> <Element3><![CDATA[4129]]></Element3> <Element4 dataFormat='2' dataSuffix='%'><![CDATA[0.7151]]></Element4> <Element5><![CDATA[2010-04-04]]></Element5> <Element6><![CDATA[2010-05-03]]></Element6> </row> </reportTable> ... Now currently I am using the XMLdom utility class (developed by SF for XML functions) to map this data into a custom object "reportTable" which contains a list of "row" custom objects. The reason I am building it out this way is because I don't know how many elements will be in each row, nor the number of rows. The Visualforce page looks something like this: <table><apex:repeat value="{!reportTables}" var="table"> <apex:repeat value="{!table.rows}" var="row"> <tr> <apex:repeat value="{!row.ColumnValue}" var="column"> <apex:repeat value="{!column}" var="value"> <td> <apex:outputText value="{!value}" /> </td> </apex:repeat> </apex:repeat> </tr> </apex:repeat> Questions are: 1) Does this seem like a good approach to the problem? 2) Is there a simpler/better way to consume the XML besides writing my own custom objects to map VF to? Open to any and all suggestions. I really hope there is a better way than building the HTML table myself, as then I also have to deal with styling and alignment etc. Thanks.
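    For reference, the XML shape above maps naturally onto a list of rows, each holding (tag, value) pairs, without knowing the element names up front; that is essentially what the reportTable/row wrapper objects hold. Below is a minimal Python sketch of the walk using ElementTree; the Apex XMLdom traversal would follow the same structure.

        import xml.etree.ElementTree as ET

        xml = """<Wrapper><reportTable name='table_id' title='Report Title'>
          <row>
            <Element1><![CDATA[campaign_id]]></Element1>
            <Element2><![CDATA[577373]]></Element2>
            <Element4 dataFormat='2' dataSuffix='%'><![CDATA[0.7151]]></Element4>
          </row>
        </reportTable></Wrapper>"""

        root = ET.fromstring(xml)
        for table in root.findall("reportTable"):
            print("table:", table.get("title"))
            for row in table.findall("row"):
                cells = [(col.tag, (col.text or "").strip()) for col in row]
                print(cells)   # [('Element1', 'campaign_id'), ('Element2', '577373'), ...]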

    Read the article

  • Is this way of storing typed objects in memory good?

    - by Pindatjuh
    This is an "is this okay, or can it be done better" question. Topic: Storing typed objects in memory. Background information: I'm building a compiler for the x86-32 platform for my language. My goal includes typed objects. Idea: Every primitive is a semi-class (it can be used as if it was a normal class, but it's stored more compact). Every class is represented by primitives and some meta-data (containing class-properties, inheritance stuff, etc.). The meta-data is complex: it doesn't use fields but instead context-switches. For primitives, the meta-data is very small, compared to a "real" class, which is alot bigger. This enables another idea that "primitives are objects", in my language, which I found nessecairy. Example: If I have an array of 32 booleans, then the pure content of this array is exactly 4 byte (32 bits of booleans). The meta-data will contain flags that the type is an array of booleans, which contains 32 entries. The meta-data is very compacted, on bit-level: using a sort of "packing" mechanism, which is read by a FSM at runtime, when doing inspection of the type (like when passing the object to methods for checking, etc.) For instance (read from left to right, top to bottom, remember vertical possition when going to the right, and check nearest column header for meaning of switch): Primitive? Array? Type-Meta 1 Byte? || Size (1 byte) 1 1 [...] 1 [...] done 0 2 Bytes? || Size (2 bytes) 1 [...] done || Size (4 bytes) 0 [...] done Integer? 1 Byte? 2 Bytes? 0 1 0 1 done 1 done 0 done Boolean? Byte? 0 1 0 done 1 done More-Primitives 0 .... Class-Stuff (Huge) 0 ... (After reaching done the data is inserted. || = byte alignement. [...] is variable sized. ... is not described here, for simplicity. And let's call them cost-based-data-structures.) For an array of 32 booleans containing all true values, the memory for this type would be (read top-down): 1 Primitive 1 Array 1 ArrayType: Primitive 0 Not-Array 0 Not-Integer 1 Boolean 0 Not-Byte (thus bit) 1 Integer Size: 1 Byte 00100000 Array size 11111111 11111111 11111111 11111111 Data Thus, 8 bytes represent 32 booleans in an array: 11100101 00100000 11111111 11111111 11111111 11111111 Is this okay, or can it be done better?

    Read the article

  • Xen DomU on DRBD device: barrier errors

    - by Halfgaar
    I'm testing setting up a Xen DomU with a DRBD storage for easy failover. Most of the time, immediatly after booting the DomU, I get an IO error: [ 3.153370] EXT3-fs (xvda2): using internal journal [ 3.277115] ip_tables: (C) 2000-2006 Netfilter Core Team [ 3.336014] nf_conntrack version 0.5.0 (3899 buckets, 15596 max) [ 3.515604] init: failsafe main process (397) killed by TERM signal [ 3.801589] blkfront: barrier: write xvda2 op failed [ 3.801597] blkfront: xvda2: barrier or flush: disabled [ 3.801611] end_request: I/O error, dev xvda2, sector 52171168 [ 3.801630] end_request: I/O error, dev xvda2, sector 52171168 [ 3.801642] Buffer I/O error on device xvda2, logical block 6521396 [ 3.801652] lost page write due to I/O error on xvda2 [ 3.801755] Aborting journal on device xvda2. [ 3.804415] EXT3-fs (xvda2): error: ext3_journal_start_sb: Detected aborted journal [ 3.804434] EXT3-fs (xvda2): error: remounting filesystem read-only [ 3.814754] journal commit I/O error [ 6.973831] init: udev-fallback-graphics main process (538) terminated with status 1 [ 6.992267] init: plymouth-splash main process (546) terminated with status 1 The manpage of drbdsetup says that LVM (which I use) doesn't support barriers (better known as tagged command queuing or native command queing), so I configured the drbd device not to use barriers. This can be seen in /proc/drbd (by "wo:f, meaning flush, the next method drbd chooses after barrier): 3: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r---- ns:2160152 nr:520204 dw:2680344 dr:2678107 al:3549 bm:9183 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0 And on the other host: 3: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r---- ns:0 nr:2160152 dw:2160152 dr:0 al:0 bm:8052 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0 I also enabled the option disable_sendpage, as per the drbd docs: cat /sys/module/drbd/parameters/disable_sendpage Y I also tried adding barriers=0 to fstab as mount option. Still it sometimes says: [ 58.603896] blkfront: barrier: write xvda2 op failed [ 58.603903] blkfront: xvda2: barrier or flush: disabled I don't even know if ext3 has a nobarrier option. And, because only one of my storage systems is battery backed, it would not be smart anyway. Why does it still compain about barriers when I disabled that? Both host are: Debian: 6.0.4 uname -a: Linux 2.6.32-5-xen-amd64 drbd: 8.3.7 Xen: 4.0.1 Guest: Ubuntu 12.04 LTS uname -a: Linux 3.2.0-24-generic pvops drbd resource: resource drbdvm { meta-disk internal; device /dev/drbd3; startup { # The timeout value when the last known state of the other side was available. 0 means infinite. wfc-timeout 0; # Timeout value when the last known state was disconnected. 0 means infinite. degr-wfc-timeout 180; } syncer { # This is recommended only for low-bandwidth lines, to only send those # blocks which really have changed. #csums-alg md5; # Set to about half your net speed rate 60M; # It seems that this option moved to the 'net' section in drbd 8.4. (later release than Debian has currently) verify-alg md5; } net { # The manpage says this is recommended only in pre-production (because of its performance), to determine # if your LAN card has a TCP checksum offloading bug. #data-integrity-alg md5; } disk { # Detach causes the device to work over-the-network-only after the # underlying disk fails. Detach is not default for historical reasons, but is # recommended by the docs. # However, the Debian defaults in drbd.conf suggest the machine will reboot in that event... 
on-io-error detach; # LVM doesn't support barriers, so disabling it. It will revert to flush. Check wo: in /proc/drbd. If you don't disable it, you get IO errors. no-disk-barrier; } on host1 { # universe is a VG disk /dev/universe/drbdvm-disk; address 10.0.0.1:7792; } on host2 { # universe is a VG disk /dev/universe/drbdvm-disk; address 10.0.0.2:7792; } } DomU cfg: bootloader = '/usr/lib/xen-default/bin/pygrub' vcpus = '2' memory = '512' # # Disk device(s). # root = '/dev/xvda2 ro' disk = [ 'phy:/dev/drbd3,xvda2,w', 'phy:/dev/universe/drbdvm-swap,xvda1,w', ] # # Hostname # name = 'drbdvm' # # Networking # # fake IP for posting vif = [ 'ip=1.2.3.4,mac=00:16:3E:22:A8:A7' ] # # Behaviour # on_poweroff = 'destroy' on_reboot = 'restart' on_crash = 'restart' In my test setup: the primary host's storage is 9650SE SATA-II RAID PCIe with battery. The secondary is software RAID1. Isn't DRBD+Xen widely used? With these problems, it's not going to work.

    Read the article

  • Experienced developer trying to get outsourcing contract with current client.

    - by Mike
    I work for a major bank as a contract software developer. I've been there a few months, and without exception this place has the worst software practices I've ever seen. The software my team makes has no formal testing, terrible code (not reusable, hard to read, etc), minimal documentation, no defined development process and an absolutely sickening amount of waste due to bureaucratic overhead. Part of my contract is to maintain a group of thousands of very poorly written batch jobs. When one of the jobs fails (read: crashes), it's a developers job to look at the source, figure out what's wrong, fix it, and check it in. There is no quality assurance process or auditing of the results whatsoever. Once the developer says "it works" a manager signs off and it goes into production. What's disturbing is that these jobs essentially grab market data and put it into a third-party risk management system, which provides the bank with critical intelligence. I've discovered the disturbing truth that this has been happening since the 90s and nobody really has evidence the system is getting the correct data! Without going into details, an issue arose on Friday that was so horrible I actually stormed out of the place. I was ready to quit, but I decided to just get out to calm my nerves and possibly go back Monday. I've been reflecting today on how to handle this. I have realized that, in probably less than 6 months, I could (with 2 other developers) remake a large component of this system. The new system would provide them with, as primary benefits, a maintainable codebase less prone to error and a solid QA framework. To do it properly I would have to be outside the bank, the internal bureaucracy is just too much. And moreover, I think a bank is fundamentally not a place that can make good software. This is my plan. Write a report explaining in depth all the problems with their current system Explain why their software practices fail and generate a tremendous amount of error and waste. Use this as the basis for claiming the project must be developed externally. Write a high level development plan, including what resources I will require Hand 1,2,3 to my manager, hopefully he passes it up the chain. Worst case he fires me, but this isn't so bad. Convinced Executive decides to award my company a contract for the new system I have 8 years experience as a software contractor and have delivered my share of successful software products, but all working in-house for small/medium sized companies. When I read this over, I think I have a dynamite plan. But since this is the first time doing something this bold so I have my doubts. My question is, is this a good idea? If you think not, please spare no detail.

    Read the article

  • Start a short video when an incoming call is detected, first case using the emulator.

    - by Emanuel
    I want to be able to start a short video on an incoming phone call. The video will loop until the call is answered. I've loaded the video onto the emulator sdcard then created the appropriate level avd with a path to the sdcard.iso file on disk. Since I'm running on a Mac OS x snow leopard I am able to confirm the contents of the sdcard. All testing has be done on the Android emulator. In a separate project TestVideo I created an activity that just launches the video from the sdcard. That works as expected. Then I created another project TestIncoming that creates an activity that creates a PhoneStateListener that overrides the onCallStateChanged(int state, String incomingNumber) method. In the onCallStateChanged() method I check if state == TelephonyManager.CALL_STATE_RINGING. If true I create an Intent that starts the video. I'm actually using the code from the TestVideo project above. Here is the code snippet. PhoneStateListener callStateListener = new PhoneStateListener() { @Override public void onCallStateChanged(int state, String incomingNumber) { if(state == TelelphonyManager.CALL_STATE_RINGING) { Intent launchVideo = new Intent(MyActivity.this, LaunchVideo.class); startActivity(launchVideo); } } }; The PhoneStateListener is added to the TelephonyManager.listen() method like so, telephonyManager.listen(callStateListener, PhoneStateListener.LISTEN_CALL_STATE); Here is the part I'm unclear on, the manifest. What I've tried is the following: <?xml version="1.0" encoding="utf-8"?> <manifest xmlns:android="http://schemas.android.com/apk/res/android" package="com.example.incomingdemo" android:versionCode="1" android:versionName="1.0"> <application android:icon="@drawable/icon" android:label="@string/app_name"> <activity android:name=".IncomingVideoDemo" android:label="@string/app_name"> <intent-filter> <action android:name="android.intent.action.ANSWER" /> <category android:name="android.intent.category.DEFAULT" /> </intent-filter> </activity> <activity android:name=".LaunchVideo" android:label="LaunchVideo"> </activity> </application> <uses-sdk android:minSdkVersion="2" /> <uses-permission android:name="android.permission.READ_PHONE_STATE"/> </manifest> I've run the debugger after setting breakpoints in the IncomingVideoDemo activity where the PhoneStateListener is created and none of the breakpoints are hit. Any insights into solving this problem is greatly appreciated. Thanks.

    Read the article

  • How to use CFNetwork to get byte array from sockets?

    - by Vic
    Hi, I'm working in a project for the iPad, it is a small program and I need it to communicate with another software that runs on windows and act like a server; so the application that I'm creating for the iPad will be the client. I'm using CFNetwork to do sockets communication, this is the way I'm establishing the connection: char ip[] = "192.168.0.244"; NSString *ipAddress = [[NSString alloc] initWithCString:ip]; /* Build our socket context; this ties an instance of self to the socket */ CFSocketContext CTX = { 0, self, NULL, NULL, NULL }; /* Create the server socket as a TCP IPv4 socket and set a callback */ /* for calls to the socket's lower-level connect() function */ TCPClient = CFSocketCreate(NULL, PF_INET, SOCK_STREAM, IPPROTO_TCP, kCFSocketDataCallBack, (CFSocketCallBack)ConnectCallBack, &CTX); if (TCPClient == NULL) return; /* Set the port and address we want to listen on */ struct sockaddr_in addr; memset(&addr, 0, sizeof(addr)); addr.sin_len = sizeof(addr); addr.sin_family = AF_INET; addr.sin_port = htons(PORT); addr.sin_addr.s_addr = inet_addr([ipAddress UTF8String]); CFDataRef connectAddr = CFDataCreate(NULL, (unsigned char *)&addr, sizeof(addr)); CFSocketConnectToAddress(TCPClient, connectAddr, -1); CFRunLoopSourceRef sourceRef = CFSocketCreateRunLoopSource(kCFAllocatorDefault, TCPClient, 0); CFRunLoopAddSource(CFRunLoopGetCurrent(), sourceRef, kCFRunLoopCommonModes); CFRelease(sourceRef); CFRunLoopRun(); And this is the way I sent the data, which basically is a byte array /* The native socket, used for various operations */ // TCPClient is a CFSocketRef member variable CFSocketNativeHandle sock = CFSocketGetNative(TCPClient); Byte byteData[3]; byteData[0] = 0; byteData[1] = 4; byteData[2] = 0; send(sock, byteData, strlen(byteData)+1, 0); Finally, as you may have noticed, when I create the server socket, I registered a callback for the kCFSocketDataCallBack type, this is the code. void ConnectCallBack(CFSocketRef socket, CFSocketCallBackType type, CFDataRef address, const void *data, void *info) { // SocketsViewController is the class that contains all the methods SocketsViewController *obj = (SocketsViewController*)info; UInt8 *unsignedData = (UInt8 *) CFDataGetBytePtr(data); char *value = (char*)unsignedData; NSString *text = [[NSString alloc]initWithCString:value length:strlen(value)]; [obj writeToTextView:text]; [text release]; } Actually, this callback is being invoked in my code, the problem is that I don't know how can I get the data that the windows client sent me, I'm expecting to receive an array of bytes, but I don't know how can I get those bytes from the data param. If anyone can help me to find a way to do this, or maybe me point me to another way to get the data from the server in my client application I would really appreciate it. Thanks.
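    For testing the exchange end to end, it can help to stand in for the Windows side with a small TCP peer that prints whatever bytes arrive and sends a few back. Below is a minimal Python sketch, assuming a plain TCP protocol with small fixed-size byte arrays; the port and payload are placeholders.

        # Stand-in peer: read a small byte array from the client, reply with one.
        import socket

        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", 9000))
        srv.listen(1)

        conn, addr = srv.accept()
        payload = conn.recv(1024)                 # e.g. the bytes [0, 4, 0] sent above
        print("received", list(payload), "from", addr)
        conn.sendall(bytes([1, 2, 3]))            # bytes the client's data callback will see
        conn.close()
        srv.close()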

    Read the article

  • Where to put a glossary of important terms and patterns in documentation?

    - by Tetha
    Greetings. I want to document certain patterns in the code in order to build up a consistent terminology (and so ease communication about the software). I am, however, unsure where to define the terms. To get on the same level, an example: I have a code generator. This code generator receives a certain InputStructure from the Parser (yes, the name InputStructure might be less than ideal). This InputStructure is then transformed into various subsequent datastructures (like an abstract description of the validation process). Each of these datastructures can either be transformed into another value of the same datastructure, or it can be transformed into the next datastructure. This should sound like Pipes and Filters to some degree. Given this, I call an operation which takes a datastructure and constructs a value of the same datastructure a transformation, while I call an operation which takes a datastructure and produces a different follow-up datastructure a derivation. The final step of deriving a string containing code is called emitting. (So, overall, the code generator takes the input structure and transforms, transforms, derives, transforms, derives and finally emits.) I think emphasizing these terms will be beneficial in communication, because then it is easy to talk about things. If you hear "transformation", you know "OK, I only need to think about these two datastructures"; if you hear "emitting", you know "OK, I only need to know this datastructure and the target language."

    However, where do I document these patterns? The current code base uses visitors and offers classes with names like ValidatorTransformationBase<ResultType> (or InputStructureTransformationBase<ResultType>, and so on). I do not really want to add the definition of such terms to the interfaces, because in that case I'd have to repeat myself on each and every interface, which clearly violates DRY. I am considering emphasizing the distinction between transformations and derivations by adding further interfaces (I would have to think about a better name for the TransformationBase classes, but then I could do things like ValidatorTransformation extends ValidatorTransformationBase<Validator>, or ValidatorDerivationFromInputStructure extends InputStructureTransformation<Validator>). I also think I should add a custom page to the existing Doxygen documentation, such as "Glossary" or "Architecture Principles", which contains these principles. The only disadvantage of this would be that a contributor will need to find this page in order to actually learn about the terms. Am I missing possibilities, or am I judging something wrong here in your opinion? -- Regards, Tetha

    Read the article

  • Android: Getting Error: Conversion to Dalvik format failed

    - by Rupesh C
    I am building an app on android and running into an error and while searching on net, came across your posting on this and changed the eclipse.ini to increase Xms and Xmx params but still this error does not go away. I am using Eclipse IDE for Java with Andrioid SDK 2.1 on Mac OS. Please help or please point me to someone who might know. Btw, this error only happens when i add external jar files (which i need for my project). here are the list of external jar files that i have in my classpath.) // httpclient-4.0.1.jar from apache // httpcore -4.0.1.jarfrom apache // commons-codec-1.3.jar from apache //commons-logging-1.1.1.jar from apache // json_simple-1.1.jar from google Here is the complete error: UNEXPECTED TOP-LEVEL EXCEPTION: java.lang.IllegalArgumentException: already added: Lorg/apache/commons/logging/impl/AvalonLogger; [2010-05-02 21:57:05 - MyApp]     at com.android.dx.dex.file.ClassDefsSection.add(ClassDefsSection.java:123) [2010-05-02 21:57:05 - MyApp]     at com.android.dx.dex.file.DexFile.add(DexFile.java:143) [2010-05-02 21:57:05 - MyApp]     at com.android.dx.command.dexer.Main.processClass(Main.java:301) [2010-05-02 21:57:05 - MyApp]     at com.android.dx.command.dexer.Main.processFileBytes(Main.java:278) [2010-05-02 21:57:05 - MyApp]     at com.android.dx.command.dexer.Main.access$100(Main.java:56) [2010-05-02 21:57:05 - MyApp]     at com.android.dx.command.dexer.Main$1.processFileBytes(Main.java:229) [2010-05-02 21:57:05 - MyApp]     at com.android.dx.cf http://com.android.dx.cf.direct.ClassPathOpener.pro .direct.ClassPathOpener.processArchive(ClassPathOpener.java:244) [2010-05-02 21:57:05 - MyApp]     at com.android.dx.cf.direct.ClassPathOpener.processOne(ClassPathOpener.java:130) [2010-05-02 21:57:05 - MyApp]     at com.android.dx.cf.direct.ClassPathOpener.process(ClassPathOpener.java:108) [2010-05-02 21:57:05 - MyApp]     at com.android.dx.command.dexer.Main.processOne(Main.java:247) [2010-05-02 21:57:05 - MyApp]     at com.android.dx.command.dexer.Main.processAllFiles(Main.java:183) [2010-05-02 21:57:05 - MyApp]     at com.android.dx.command.dexer.Main.run(Main.java:139) [2010-05-02 21:57:05 - MyApp]     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) [2010-05-02 21:57:05 - MyApp]     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) [2010-05-02 21:57:05 - MyApp]     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) [2010-05-02 21:57:05 - MyApp]     at java.lang.reflect.Method.invoke(Method.java:592) [2010-05-02 21:57:05 - MyApp]     at com.android.ide.eclipse.adt.internal.sdk.DexWrapper.run(Unknown Source) [2010-05-02 21:57:05 - MyApp]     at com.android.ide.eclipse.adt.internal.build.ApkBuilder.executeDx(Unknown Source) [2010-05-02 21:57:05 - MyApp]     at com.android.ide.eclipse.adt.internal.build.ApkBuilder.build(Unknown Source) [2010-05-02 21:57:05 - MyApp]     at org.eclipse.core.internal.events.BuildManager$2.run(BuildManager.java:627) [2010-05-02 21:57:05 - MyApp]     at org.eclipse.core.runtime.SafeRunner.run(SafeRunner.java:42) [2010-05-02 21:57:05 - MyApp]     at org.eclipse.core.internal.events.BuildManager.basicBuild(BuildManager.java:170) [2010-05-02 21:57:05 - MyApp]     at org.eclipse.core.internal.events.BuildManager.basicBuild(BuildManager.java:201) [2010-05-02 21:57:05 - MyApp]     at org.eclipse.core.internal.events.BuildManager$1.run(BuildManager.java:253) [2010-05-02 21:57:05 - MyApp]     at org.eclipse.core.runtime.SafeRunner.run(SafeRunner.java:42) 
[2010-05-02 21:57:05 - MyApp]     at org.eclipse.core.internal.events.BuildManager.basicBuild(BuildManager.java:256) [2010-05-02 21:57:05 - MyApp]     at org.eclipse.core.internal.events.BuildManager.basicBuildLoop(BuildManager.java:309) [2010-05-02 21:57:05 - MyApp]     at org.eclipse.core.internal.events.BuildManager.build(BuildManager.java:341) [2010-05-02 21:57:05 - MyApp]     at org.eclipse.core.internal.events.AutoBuildJob.doBuild(AutoBuildJob.java:140) [2010-05-02 21:57:05 - MyApp]     at org.eclipse.core.internal.events.AutoBuildJob.run(AutoBuildJob.java:238) [2010-05-02 21:57:05 - MyApp]     at org.eclipse.core.internal.jobs.Worker.run(Worker.java:55) [2010-05-02 21:57:05 - MyApp] 4 errors; aborting [2010-05-02 21:57:05 - MyApp] Conversion to Dalvik format failed with error 1 Thanks, Rupesh

    Read the article

  • PHP-based LaTeX parser -- where to begin?

    - by Alex Basson
    The project: I want to build a LaTeX-to-MathML translator in PHP. Why? Because I'm a mathematician, and I want to publish math on my Drupal site. It doesn't have to translate all of LaTeX, since the basic document-level stuff is ably handled by the CMS and wouldn't be written in LaTeX to begin with; it just has to translate math written in LaTeX into math written in MathML. Although I feel as though I've done my due diligence, this doesn't seem to exist already. Maybe I'm wrong---if you know of something that would serve this purpose, by all means let me know, and thank you in advance. But assuming it doesn't exist, I guess I have to go write it myself. Here's the thing, though: I've never done anything this ambitious. I don't really know where to begin. I've used PHP for years, but just to do the standard "build a CMS with PHP and MySQL"-type of stuff. I've never attempted anything as seemingly sophisticated as translation from one language to another. I'm just dumb enough to consider doing it with regex---after all, LaTeX is a much more formal language, and it doesn't allow for nearly the kinds of pathological edge-cases, as say, HTML. But on the other hand, I'm just smart enough to realize this is probably a terrible idea: now I have two problems, and I sure don't want to end up like this guy. So if that's not the way to go (right?), what is? How should I start thinking about this problem? Am I essentially writing a LaTeX compiler in PHP, and if so, what do I need to know to do that (like, should I just go read the Purple Dragon book first?)? I'm both really excited and pretty intimidated by the prospect of this project, but hey, this is how we all learn to be programmers, right? If something we need doesn't exist, we go and build it, necessity is the mother of... you get the point. Tremendous thanks to everyone in advance for any and all guidance you can offer.
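    On the regex-versus-parser point, a common middle ground is to use a regular expression only for tokenizing and then run a small recursive-descent pass over the token stream, which avoids both the "parse LaTeX with one regex" trap and a full compiler-construction detour. Below is a minimal Python sketch of the tokenizing step for math-mode input; the token classes are illustrative.

        import re

        TOKEN = re.compile(r"""
              \\[A-Za-z]+            # control words: \frac, \alpha, ...
            | \\.                    # control symbols: \{ \} \\
            | [{}^_&]                # grouping, scripts, alignment
            | [0-9]+(?:\.[0-9]+)?    # numbers
            | [A-Za-z]               # single-letter identifiers
            | \S                     # anything else, one character at a time
        """, re.VERBOSE)

        def tokenize(tex):
            return TOKEN.findall(tex)

        print(tokenize(r"\frac{x^2 + 1}{\sqrt{y_i}}"))
        # ['\\frac', '{', 'x', '^', '2', '+', '1', '}', '{', '\\sqrt', '{', 'y', '_', 'i', '}', '}']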

    Read the article

  • Binary Search Help

    - by aloh
    Hi, for a project I need to implement a binary search. This binary search allows duplicates. I have to get all the index values that match my target. I've thought about doing it this way if a duplicate is found to be in the middle: Target = G Say there is this following sorted array: B, D, E, F, G, G, G, G, G, G, Q, R S, S, Z I get the mid which is 7. Since there are target matches on both sides, and I need all the target matches, I thought a good way to get all would be to check mid + 1 if it is the same value. If it is, keep moving mid to the right until it isn't. So, it would turn out like this: B, D, E, F, G, G, G, G, G, G (MID), Q, R S, S, Z Then I would count from 0 to mid to count up the target matches and store their indexes into an array and return it. That was how I was thinking of doing it if the mid was a match and the duplicate happened to be in the mid the first time and on both sides of the array. Now, what if it isn't a match the first time? For example: B, D, E, F, G, G, J, K, L, O, Q, R, S, S, Z Then as normal, it would grab the mid, then call binary search from first to mid-1. B, D, E, F, G, G, J Since G is greater than F, call binary search from mid+1 to last. G, G, J. The mid is a match. Since it is a match, search from mid+1 to last through a for loop and count up the number of matches and store the match indexes into an array and return. Is this a good way for the binary search to grab all duplicates? Please let me know if you see problems in my algorithm and hints/suggestions if any. The only problem I see is that if all the matches were my target, I would basically be searching the whole array but then again, if that were the case I still would need to get all the duplicates. Thank you BTW, my instructor said we cannot use Vectors, Hash or anything else. He wants us to stay on the array level and get used to using them and manipulating them.
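    For comparison, the standard way to collect every matching index without walking left and right from a hit is to run two binary searches: one for the first position the target could occupy (lower bound) and one for the position just past the last match (upper bound); every index in between is a match. Below is a minimal Python sketch using the bisect module; the same two searches are straightforward to hand-roll on a plain array if library helpers are off limits.

        from bisect import bisect_left, bisect_right

        def find_all(sorted_items, target):
            lo = bisect_left(sorted_items, target)    # first index where target could go
            hi = bisect_right(sorted_items, target)   # first index past the last match
            return list(range(lo, hi))                # empty list when target is absent

        data = ['B', 'D', 'E', 'F', 'G', 'G', 'G', 'G', 'G', 'G', 'Q', 'R', 'S', 'S', 'Z']
        print(find_all(data, 'G'))   # [4, 5, 6, 7, 8, 9]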

    Read the article

  • INSERT OR IGNORE in a trigger

    - by dan04
    I have a database (for tracking email statistics) that has grown to hundreds of megabytes, and I've been looking for ways to reduce it. It seems that the main reason for the large file size is that the same strings tend to be repeated in thousands of rows. To avoid this problem, I plan to create another table for a string pool, like so: CREATE TABLE AddressLookup ( ID INTEGER PRIMARY KEY AUTOINCREMENT, Address TEXT UNIQUE ); CREATE TABLE EmailInfo ( MessageID INTEGER PRIMARY KEY AUTOINCREMENT, ToAddrRef INTEGER REFERENCES AddressLookup(ID), FromAddrRef INTEGER REFERENCES AddressLookup(ID) /* Additional columns omitted for brevity. */ ); And for convenience, a view to join these tables: CREATE VIEW EmailView AS SELECT MessageID, A1.Address AS ToAddr, A2.Address AS FromAddr FROM EmailInfo LEFT JOIN AddressLookup A1 ON (ToAddrRef = A1.ID) LEFT JOIN AddressLookup A2 ON (FromAddrRef = A2.ID); In order to be able to use this view as if it were a regular table, I've made some triggers: CREATE TRIGGER trg_id_EmailView INSTEAD OF DELETE ON EmailView BEGIN DELETE FROM EmailInfo WHERE MessageID = OLD.MessageID; END; CREATE TRIGGER trg_ii_EmailView INSTEAD OF INSERT ON EmailView BEGIN INSERT OR IGNORE INTO AddressLookup(Address) VALUES (NEW.ToAddr); INSERT OR IGNORE INTO AddressLookup(Address) VALUES (NEW.FromAddr); INSERT INTO EmailInfo SELECT NEW.MessageID, A1.ID, A2.ID FROM AddressLookup A1, AddressLookup A2 WHERE A1.Address = NEW.ToAddr AND A2.Address = NEW.FromAddr; END; CREATE TRIGGER trg_iu_EmailView INSTEAD OF UPDATE ON EmailView BEGIN UPDATE EmailInfo SET MessageID = NEW.MessageID WHERE MessageID = OLD.MessageID; REPLACE INTO EmailView SELECT NEW.MessageID, NEW.ToAddr, NEW.FromAddr; END; The problem After: INSERT OR REPLACE INTO EmailView VALUES (1, '[email protected]', '[email protected]'); INSERT OR REPLACE INTO EmailView VALUES (2, '[email protected]', '[email protected]'); The updated rows contain: MessageID ToAddr FromAddr --------- ------ -------- 1 NULL [email protected] 2 [email protected] [email protected] There's a NULL that shouldn't be there. The corresponding cell in the EmailInfo table contains an orphaned ToAddrRef value. If you do the INSERTs one at a time, you'll see that Alice's ID in the AddressLookup table changes! It appears that this behavior is documented: An ON CONFLICT clause may be specified as part of an UPDATE or INSERT action within the body of the trigger. However if an ON CONFLICT clause is specified as part of the statement causing the trigger to fire, then conflict handling policy of the outer statement is used instead. So the "REPLACE" in the top-level "INSERT OR REPLACE" statement is overriding the critical "INSERT OR IGNORE" in the trigger program. Is there a way I can make it work the way that I wanted?
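    The override is easy to reproduce in isolation. Below is a minimal Python sqlite3 sketch of the same schema (addresses are placeholders, and only the INSTEAD OF INSERT trigger is included) showing the outer OR REPLACE policy reaching into the trigger body and renumbering the unique address row, which orphans the earlier reference.

        import sqlite3

        db = sqlite3.connect(":memory:")
        db.executescript("""
        CREATE TABLE AddressLookup (ID INTEGER PRIMARY KEY AUTOINCREMENT,
                                    Address TEXT UNIQUE);
        CREATE TABLE EmailInfo (MessageID INTEGER PRIMARY KEY,
                                ToAddrRef INTEGER, FromAddrRef INTEGER);
        CREATE VIEW EmailView AS
          SELECT MessageID, A1.Address AS ToAddr, A2.Address AS FromAddr
          FROM EmailInfo
          LEFT JOIN AddressLookup A1 ON ToAddrRef = A1.ID
          LEFT JOIN AddressLookup A2 ON FromAddrRef = A2.ID;
        CREATE TRIGGER trg_ii INSTEAD OF INSERT ON EmailView BEGIN
          INSERT OR IGNORE INTO AddressLookup(Address) VALUES (NEW.ToAddr);
          INSERT OR IGNORE INTO AddressLookup(Address) VALUES (NEW.FromAddr);
          INSERT INTO EmailInfo
            SELECT NEW.MessageID, A1.ID, A2.ID
            FROM AddressLookup A1, AddressLookup A2
            WHERE A1.Address = NEW.ToAddr AND A2.Address = NEW.FromAddr;
        END;
        """)
        db.execute("INSERT OR REPLACE INTO EmailView VALUES (1, 'a@example.org', 'b@example.org')")
        db.execute("INSERT OR REPLACE INTO EmailView VALUES (2, 'c@example.org', 'a@example.org')")
        # Row 1's ToAddr comes back NULL: the REPLACE policy deleted and re-created
        # the unique 'a@example.org' row, so its ID changed and the old reference is orphaned.
        print(db.execute("SELECT * FROM EmailView").fetchall())
        print(db.execute("SELECT * FROM AddressLookup").fetchall())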

    Read the article

  • Installing my sdist from PyPI puts the files in the wrong places

    - by Tartley
    Hey. My problem is that when I upload my Python package to PyPI, and then install it from there using pip, my app breaks because it installs my files into completely different locations than when I simply install the exact same package from a local sdist. Installing from the local sdist puts files on my system like this: /Python27/ Lib/ site-packages/ gloopy-0.1.alpha-py2.7.egg/ (egg and install info files) data/ (images and shader source) doc/ (html) examples/ (.py scripts that use the library) gloopy/ (source). This is much as I'd expect, and works fine (e.g. my source can find my data dir, because they lie next to each other, just like they do in development). If I upload the same sdist to PyPI and then install it from there, using pip, then things look very different: /Python27/ data/ (images and shader source) doc/ (html) Lib/ site-packages/ gloopy-0.1.alpha-py2.7.egg/ (egg and install info files) gloopy/ (source files) examples/ (.py scripts that use the library). This doesn't work at all: my app can't find its data files, plus obviously it's a mess, polluting the top-level /Python27 directory with all my junk. What am I doing wrong? How do I make the pip install behave like the local sdist install? Is that even what I should be trying to achieve? Details: I have setuptools installed, and also distribute, and I'm calling distribute_setup.use_setuptools(). Windows XP, Python 2.7. My development directory looks like this: /gloopy /data (image files and GLSL shader source read at runtime) /doc (html files) /examples (some scripts to show off the library) /gloopy (the library itself). My MANIFEST.in mentions all the files I want to be included in the sdist, including everything in the data, examples and doc directories:
        recursive-include data *.*
        recursive-include examples *.py
        recursive-include doc/html *.html *.css *.js *.png
        include LICENSE.txt
        include TODO.txt
    My setup.py is quite verbose, but I guess the best thing is to include it here, right? It also includes duplicate references to the same data / doc / examples directories as are mentioned in the MANIFEST.in, because I understand this is required in order for these files to be copied from the sdist to the system during install.
    NAME = 'gloopy'
    VERSION = __import__(NAME).VERSION
    RELEASE = __import__(NAME).RELEASE
    SCRIPT = None
    CONSOLE = False

    def main():
        import sys
        from glob import glob                      # needed for the data_files globs below
        from pprint import pprint
        from setup_utils import distribute_setup
        from setup_utils.sdist_setup import get_sdist_config
        distribute_setup.use_setuptools()
        from setuptools import setup, find_packages

        description, long_description = read_description()
        config = dict(
            name=NAME,
            version=VERSION,
            description=description,
            long_description=long_description,
            keywords='',
            packages=find_packages(),
            data_files=[
                ('examples', glob('examples/*.py')),
                ('data/shaders', glob('data/shaders/*.*')),
                ('doc', glob('doc/html/*.*')),
                ('doc/_images', glob('doc/html/_images/*.*')),
                ('doc/_modules', glob('doc/html/_modules/*.*')),
                ('doc/_modules/gloopy', glob('doc/html/_modules/gloopy/*.*')),
                ('doc/_modules/gloopy/geom', glob('doc/html/_modules/gloopy/geom/*.*')),
                ('doc/_modules/gloopy/move', glob('doc/html/_modules/gloopy/move/*.*')),
                ('doc/_modules/gloopy/shapes', glob('doc/html/_modules/gloopy/shapes/*.*')),
                ('doc/_modules/gloopy/util', glob('doc/html/_modules/gloopy/util/*.*')),
                ('doc/_modules/gloopy/view', glob('doc/html/_modules/gloopy/view/*.*')),
                ('doc/_static', glob('doc/html/_static/*.*')),
                ('doc/_api', glob('doc/html/_api/*.*')),
            ],
            classifiers=[
                'Development Status :: 1 - Planning',
                'Intended Audience :: Developers',
                'License :: OSI Approved :: BSD License',
                'Operating System :: Microsoft :: Windows',
                'Programming Language :: Python :: 2.7',
            ],
            # see classifiers http://pypi.python.org/pypi?:action=list_classifiers
        )
        config.update(dict(
            author='Jonathan Hartley',
            author_email='[email protected]',
            url='http://bitbucket.org/tartley/gloopy',
            license='New BSD',
        ))
        if '--verbose' in sys.argv:
            pprint(config)
        setup(**config)

    if __name__ == '__main__':
        main()
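    Not an official fix, just a sketch of the approach that usually resolves this: data_files paths are interpreted relative to the installation prefix by some installers (which matches the scattered /Python27 layout above), whereas package_data is always installed inside the package, next to the code. The snippet below is hypothetical and assumes the data and examples directories have been moved inside the gloopy package; the names and globs are illustrative, not taken from the question.

        # Hypothetical setup.py fragment: ship runtime files as package data so that
        # pip installs and local sdist installs both put them next to the gloopy source.
        from setuptools import setup, find_packages

        setup(
            name='gloopy',
            version='0.1.alpha',
            packages=find_packages(),
            include_package_data=True,     # also honour MANIFEST.in entries inside packages
            package_data={
                'gloopy': [
                    'data/shaders/*.*',    # assumes data/ now lives at gloopy/data/
                    'examples/*.py',       # assumes examples/ now lives at gloopy/examples/
                ],
            },
        )

    At runtime the files can then be located relative to the package itself (for example via os.path.dirname(gloopy.__file__) or pkgutil.get_data), so the app no longer depends on where pip decides to place the distribution.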

    Read the article

  • trouble with boost::filesystem::wrecursive_directory_iterator

    - by Dogmatixed
    I'm trying to write a program to help me manage my iTunes library, including removing duplicates and cataloging certain things. At this point I'm still just trying to get it to walk through all the folders, and have run into a problem: I have a small amount of Japanese music, where the artist and/or album is written in Japanese characters. Because of how iTunes arranges things in its library, the directories contain these characters. "Shouldn't be a problem, though," I thought, because the boost::filesystem library has a wide-character version of its recursive iterator. But when I actually try to use it, it seems to stop completely when it hits the first Japanese character. A complete stop, as in it doesn't even finish printing the line: no carriage return or anything. Now, I'm still pretty new to programming, so I'm assuming it's my mistake. Anyone know why this is happening? Here's what I think is the relevant code:
        fs::wrecursive_directory_iterator end_it;
        int i;
        try
        {
            for (fs::wrecursive_directory_iterator rec_it(full_path); rec_it != end_it; ++rec_it)
            {
                for (i = 0; i < rec_it.level(); i++)
                {
                    out << "\t";
                }
                out << rec_it->string() << std::endl;
            }
        }
        catch (const std::exception& e)
        {
            out << "something went wrong: " << e.what();
        }
    And from my output file, minus some of the path:
        /Test Libs/Combine
        /Test Libs/Lib1
        /Test Libs/Lib1/02 Too Long.m4a
        /Test Libs/Lib1/03 Like a Hitman, Like a Dancer.mp3
        /Test Libs/Lib1/A Certain Ratio
        /Test Libs/Lib1/A Certain Ratio/Beyond Punk!
        /Test Libs/Lib1/A Certain Ratio/Unknown Album
        /Test Libs/Lib1/A Certain Ratio/Unknown Album/Do The Du.mp3
        /Test Libs/Lib1/A Certain Ratio/Unknown Album/Shack Up.mp3
        /Test Libs/Lib1/
    Finally, what I expect:
        /Test Libs/Combine
        /Test Libs/Lib1
        /Test Libs/Lib1/02 Too Long.m4a
        /Test Libs/Lib1/03 Like a Hitman, Like a Dancer.mp3
        /Test Libs/Lib1/A Certain Ratio
        /Test Libs/Lib1/A Certain Ratio/Beyond Punk!
        /Test Libs/Lib1/A Certain Ratio/Unknown Album
        /Test Libs/Lib1/A Certain Ratio/Unknown Album/Do The Du.mp3
        /Test Libs/Lib1/A Certain Ratio/Unknown Album/Shack Up.mp3
        /Test Libs/Lib1/???
        /Test Libs/Lib1/Bring it on
        /Test Libs/Lib1/04 Bring it on.mp3
    Any thoughts? Thanks.

    Read the article

  • Subroutine & GoTo design

    - by sub
    I have a strange question concerning subroutines. As I'm creating a minimal language and I don't want to add high-level loops like while or for, I was planning on just adding gotos to keep it Turing-complete. Then I thought: eww, gotos. I wouldn't want to program in that language if I had to use gotos so often. So I thought about adding subroutines instead. I see the difference as the following: gotos: go to (captain obvious) a previously defined point and continue executing the program from there. Leads to hardly understandable and buggy code, I think that's a fact. subroutines: similar, in that you define their starting point somewhere and the program jumps there when you call them, but the subroutine can go back to the point it was called from with return. Okay. Why didn't I just add the more function-like, nicer-looking subroutines? Because: in order to make return work if I call subroutines from within subroutines from within other subroutines, I'd have to use a stack with the return point of the currently running subroutine on top. That would then mean that, if I create loops using the subroutines, I end up with an extremely memory-hungry, overflowing stack of return locations. Not good. Don't think of my subroutines as functions: they are just gotos that return to the point they were called from; they don't actually give back values like the return x; statement in nearly all of today's languages. Now to my actual questions: How can I solve the above problem with the stack overflow on loops with subroutines? Do I have to add a separate goto language construct without the return option? Assembler doesn't have loops, but as I have seen it has myJumpPoint:, jnz, jz, retn. That means to me that there must also be a stack containing all the return locations. Am I right about that? What about long-running loops then? Don't they overflow the stack/eat memory? Am I getting the retn symbol in assembler totally wrong? If yes, please explain it to me.
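    To make the stack bookkeeping concrete, here is a tiny hypothetical interpreter sketch (Python, invented purely for illustration and not related to the poster's language): a call pushes a return address and ret pops it, while a plain goto only changes the program counter, so loops written with goto leave the stack flat and loops written as call-without-ret grow it on every pass.

        # Hypothetical mini-interpreter: 'call' pushes a return address, 'ret' pops it,
        # and 'goto' just moves the program counter without touching the stack.
        def run(program, labels, max_steps=6):
            pc, stack, steps = 0, [], 0
            while pc < len(program) and steps < max_steps:
                op, arg = program[pc]
                steps += 1
                if op == "goto":
                    pc = labels[arg]              # no stack involvement at all
                elif op == "call":
                    stack.append(pc + 1)          # remember where to come back to
                    pc = labels[arg]
                elif op == "ret":
                    pc = stack.pop()              # jump back to the caller
                else:                             # "print"
                    print(arg, "| stack depth:", len(stack))
                    pc += 1
            return len(stack)

        labels = {"loop": 0}
        goto_loop = [("print", "tick"), ("goto", "loop")]
        call_loop = [("print", "tick"), ("call", "loop")]   # a "loop" of calls that never ret

        print("goto loop final stack depth:", run(goto_loop, labels))   # stays at 0
        print("call loop final stack depth:", run(call_loop, labels))   # grows every iteration

    This mirrors how assemblers behave: jnz/jz are conditional jumps with no stack involvement, and only call/retn push and pop return addresses, which is why ordinary assembly loops do not consume stack space.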

    Read the article

  • Loading external pngs into an AS2 swf that is loaded into an AS3 swf wrapper

    - by James Fassett
    I have a Wrapper SWF that loads a series of AS2 movies. Each AS2 movie loads a series of .png files:
        AS3_wrapper.swf
          |-> AS2_1.swf
                |-> image_1.png
                |-> image_2.png
          |-> AS2_2.swf
                |-> image_1.png
                |-> image_2.png
    Inside the AS2 I listen for the load of the pngs using onLoadInit and update my UI. This works fine for the first AS2 swf, but when I load the second AS2 swf the onLoadInit isn't triggered for the pngs. My guess is that the images are in a cache or something like that. I put a random string on the end of the request to try to avoid the cache, but that doesn't seem to work. The code in the AS2 looks roughly like this:
        var flagLoader:MovieClipLoader = new MovieClipLoader();
        var listener:Object = new Object();
        listener.onLoadInit = Delegate.create(this, handleImageLoad);
        flagLoader.addListener(listener);
        var row:MovieClip = frame1["row" + (numLoaded + 1)];
        flagLoader.loadClip(predictionData[numLoaded].flag + "?r=" + Math.random(), row.flag);
    I'm making sure to load only one image at a time (I've read anecdotal evidence that loading more than one thing at a time can confuse the MovieClipLoader). For the first AS2 file everything works great. When I load the second AS2 file, handleImageLoad never gets called. Update: Even more perplexing, if I reload the first AS2 movie (after the second AS2 movie fails to load the images), the first AS2 movie loads the images again fine. Update 2: After trying to change from using a MovieClipLoader to polling (as was helpfully suggested), I have found some more evidence that is relevant. When I load the first AS2 file and trace from the top-level clip, it prints out _root. The second AS2 file, when loaded, traces the same _root. This led me to check if they were clashing on names, and they are: both have a child called frame. The first one, when I trace it, comes out as _root.frame, as expected. The second AS2 file traces _level0.instance3.instance118.instance111.frame. I'm guessing this is related to the problem. Flash is keeping the _root of the two files the same, but it is changing the locations of their children (for subsequently loaded files that have children with the same names). So either the onLoad is going to the wrong clip, or the events about it loading are.

    Read the article

  • How do I use IImgCtx to load an image with an alpha channel?

    - by fret
    I have some working code that uses IImgCtx to load images, but I can't work out how to get at the alpha channel. For images like .gifs and .pngs there are transparent pixels, but using anything other than a 24-bit bitmap as a drawing surface doesn't work. For reference on the interface: http://www.codeproject.com/KB/graphics/JianImgCtxDecoder.aspx. My code looks like this:
        IImgCtx *Ctx = 0;
        HRESULT hr = CoCreateInstance(CLSID_IImgCtx, NULL, CLSCTX_INPROC_SERVER, IID_IImgCtx, (LPVOID*)&Ctx);
        if (SUCCEEDED(hr))
        {
            GVariant Fn = Name;
            hr = Ctx->Load(Fn.WStr(), 0);
            if (SUCCEEDED(hr))
            {
                SIZE Size = { -1, -1 };
                ULONG State = 0;
                while (true)
                {
                    hr = Ctx->GetStateInfo(&State, &Size, false);
                    if (SUCCEEDED(hr))
                    {
                        if ((State & IMGLOAD_COMPLETE) || (State & IMGLOAD_STOPPED) || (State & IMGLOAD_ERROR))
                        {
                            break;
                        }
                        else
                        {
                            LgiSleep(1);
                        }
                    }
                    else break;
                }
                if (Size.cx > 0 && Size.cy > 0 && pDC.Reset(new GMemDC))
                {
                    if (pDC->Create(Size.cx, Size.cy, 32))
                    {
                        HDC hDC = pDC->StartDC();
                        if (hDC)
                        {
                            RECT rc = { 0, 0, pDC->X(), pDC->Y() };
                            Ctx->Draw(hDC, &rc);
                            pDC->EndDC();
                        }
                    }
                    else pDC.Reset();
                }
            }
            Ctx->Release();
        }
    Here "StartDC" basically wraps CreateCompatibleDC(NULL) and "EndDC" wraps DeleteDC, with appropriate SelectObjects for the HBITMAPs etc., and pDC->Create(x, y, bit_depth) calls CreateDIBSection(...DIB_RGB_COLORS...). So it works if I create a 24 bits/pixel bitmap, but has no alpha to speak of, and it leaves the 32 bits/pixel bitmap blank. Now, this interface apparently is used by Internet Explorer to load images, and obviously THAT supports transparency, so I believe it's possible to get some level of alpha out of the interface. The question is how? (I also have fallback code that will call libpng/libjpeg/my .gif loader etc.)

    Read the article

  • Avoiding Redundancies in XML documents

    - by MarceloRamires
    I was working with a certain XML where there were no redundancies:
        <person>
          <eye>
            <eye_info>
              <eye_color> blue </eye_color>
            </eye_info>
          </eye>
          <hair>
            <hair_info>
              <hair_color> blue </hair_color>
            </hair_info>
          </hair>
        </person>
    As you can see, the sub-tag eye_color makes reference to eye in its name, so there was no need to avoid redundancies; I could get the eye color in a single line after loading the XML into a dataset:
        dataset.ReadXml(path);
        value = dataset.Tables("eye_info").Rows(0)("eye_color");
    I do realise it's not the smartest way of doing so, and this situation I'm having now wasn't unforeseen. Now, let's say I have to read XMLs that are in this format:
        <person>
          <eye>
            <info>
              <color> blue </color>
            </info>
          </eye>
          <hair>
            <info>
              <color> blue </color>
            </info>
          </hair>
        </person>
    So if I try to call it like this:
        dataset.ReadXml(path);
        value = dataset.Tables("info").Rows(0)("color");
    there will be a redundancy, because with my previous method I could only go one level up to identify a single field in the XML, and the 'disambiguator' is three levels above. Is there a practical way to reach a single field without mistakes, given all the fields above (or at least a few of them)?
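    If the data can be addressed by its full path from the root rather than by table name alone, the collision disappears. As a purely illustrative sketch in Python (the question uses a .NET DataSet, so this is an assumed alternative approach, not the poster's API, and the document below uses placeholder values), xml.etree.ElementTree resolves each identically named field by its path:

        import xml.etree.ElementTree as ET

        # Placeholder document with the ambiguous layout from the question.
        doc = """
        <person>
          <eye><info><color> blue </color></info></eye>
          <hair><info><color> brown </color></info></hair>
        </person>
        """

        root = ET.fromstring(doc)
        # The path from the root picks the right <color> even though the tag
        # names below <eye> and <hair> are identical.
        eye_color = root.findtext("eye/info/color").strip()
        hair_color = root.findtext("hair/info/color").strip()
        print(eye_color, hair_color)   # blue brown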

    Read the article
