Search Results

The search found 14,184 results on 568 pages for 'peter small'.


  • Experienced developer trying to get an outsourcing contract with a current client.

    - by Mike
    I work for a major bank as a contract software developer. I've been there a few months, and without exception this place has the worst software practices I've ever seen. The software my team makes has no formal testing, terrible code (not reusable, hard to read, etc.), minimal documentation, no defined development process, and an absolutely sickening amount of waste due to bureaucratic overhead.

    Part of my contract is to maintain a group of thousands of very poorly written batch jobs. When one of the jobs fails (read: crashes), it's a developer's job to look at the source, figure out what's wrong, fix it, and check it in. There is no quality assurance process or auditing of the results whatsoever. Once the developer says "it works", a manager signs off and it goes into production. What's disturbing is that these jobs essentially grab market data and put it into a third-party risk management system, which provides the bank with critical intelligence. I've discovered the disturbing truth that this has been happening since the 90s and nobody really has evidence the system is getting the correct data!

    Without going into details, an issue arose on Friday that was so horrible I actually stormed out of the place. I was ready to quit, but I decided to just get out to calm my nerves and possibly go back Monday. I've been reflecting today on how to handle this. I have realized that, in probably less than 6 months, I could (with 2 other developers) remake a large component of this system. The new system would provide them with, as primary benefits, a maintainable codebase less prone to error and a solid QA framework. To do it properly I would have to be outside the bank; the internal bureaucracy is just too much. Moreover, I think a bank is fundamentally not a place that can make good software.

    This is my plan:
    1. Write a report explaining in depth all the problems with their current system.
    2. Explain why their software practices fail and generate a tremendous amount of error and waste. Use this as the basis for claiming the project must be developed externally.
    3. Write a high-level development plan, including what resources I will require.
    4. Hand 1, 2, and 3 to my manager, and hopefully he passes it up the chain. Worst case, he fires me, but this isn't so bad.
    5. A convinced executive decides to award my company a contract for the new system.

    I have 8 years of experience as a software contractor and have delivered my share of successful software products, but always working in-house for small/medium-sized companies. When I read this over, I think I have a dynamite plan. But since this is the first time I'm doing something this bold, I have my doubts. My question is: is this a good idea? If you think not, please spare no detail.

  • Why is numpy's einsum faster than numpy's built in functions?

    - by Ophion
    Let's start with three arrays of dtype=np.double. Timings are performed on an Intel CPU using numpy 1.7.1 compiled with icc and linked to Intel's MKL. An AMD CPU with numpy 1.6.1 compiled with gcc, without MKL, was also used to verify the timings. Please note the timings scale nearly linearly with system size and are not due to the small overhead incurred in the numpy functions' if statements; those differences would show up in microseconds, not milliseconds:

        arr_1D=np.arange(500,dtype=np.double)
        large_arr_1D=np.arange(100000,dtype=np.double)
        arr_2D=np.arange(500**2,dtype=np.double).reshape(500,500)
        arr_3D=np.arange(500**3,dtype=np.double).reshape(500,500,500)

    First let's look at the np.sum function:

        np.all(np.sum(arr_3D)==np.einsum('ijk->',arr_3D))
        True
        %timeit np.sum(arr_3D)
        10 loops, best of 3: 142 ms per loop
        %timeit np.einsum('ijk->', arr_3D)
        10 loops, best of 3: 70.2 ms per loop

    Powers:

        np.allclose(arr_3D*arr_3D*arr_3D,np.einsum('ijk,ijk,ijk->ijk',arr_3D,arr_3D,arr_3D))
        True
        %timeit arr_3D*arr_3D*arr_3D
        1 loops, best of 3: 1.32 s per loop
        %timeit np.einsum('ijk,ijk,ijk->ijk', arr_3D, arr_3D, arr_3D)
        1 loops, best of 3: 694 ms per loop

    Outer product:

        np.all(np.outer(arr_1D,arr_1D)==np.einsum('i,k->ik',arr_1D,arr_1D))
        True
        %timeit np.outer(arr_1D, arr_1D)
        1000 loops, best of 3: 411 us per loop
        %timeit np.einsum('i,k->ik', arr_1D, arr_1D)
        1000 loops, best of 3: 245 us per loop

    All of the above are twice as fast with np.einsum. These should be apples-to-apples comparisons, as everything is specifically of dtype=np.double. I would expect the speedup in an operation like this:

        np.allclose(np.sum(arr_2D*arr_3D),np.einsum('ij,oij->',arr_2D,arr_3D))
        True
        %timeit np.sum(arr_2D*arr_3D)
        1 loops, best of 3: 813 ms per loop
        %timeit np.einsum('ij,oij->', arr_2D, arr_3D)
        10 loops, best of 3: 85.1 ms per loop

    Einsum seems to be at least twice as fast for np.inner, np.outer, np.kron, and np.sum regardless of axes selection. The primary exception is np.dot, as it calls DGEMM from a BLAS library. So why is np.einsum faster than other numpy functions that are equivalent? The DGEMM case, for completeness:

        np.allclose(np.dot(arr_2D,arr_2D),np.einsum('ij,jk',arr_2D,arr_2D))
        True
        %timeit np.einsum('ij,jk',arr_2D,arr_2D)
        10 loops, best of 3: 56.1 ms per loop
        %timeit np.dot(arr_2D,arr_2D)
        100 loops, best of 3: 5.17 ms per loop

    The leading theory is from @seberg's comment that np.einsum can make use of SSE2, but numpy's ufuncs will not until numpy 1.8 (see the change log). I believe this is the correct answer, but have not been able to confirm it. Some limited proof can be found by changing the dtype of the input arrays and observing the speed difference, and by the fact that not everyone observes the same trends in timings.
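
    A minimal, self-contained version of the first comparison, runnable outside IPython. This reruns the question's own calls, just with a smaller array (200**3 doubles, ~64 MB, versus the question's ~1 GB) so it fits anywhere; absolute numbers will of course vary by machine and numpy build:

        import numpy as np
        import timeit

        # Smaller than the question's 500**3 array so it runs comfortably anywhere.
        arr_3D = np.arange(200**3, dtype=np.double).reshape(200, 200, 200)

        # Sanity check: both reductions compute the same value.
        assert np.allclose(np.sum(arr_3D), np.einsum('ijk->', arr_3D))

        n = 10
        t_sum = timeit.timeit(lambda: np.sum(arr_3D), number=n) / n
        t_einsum = timeit.timeit(lambda: np.einsum('ijk->', arr_3D), number=n) / n
        print("np.sum:    %.1f ms" % (t_sum * 1e3))
        print("np.einsum: %.1f ms" % (t_einsum * 1e3))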

  • How to use CFNetwork to get a byte array from sockets?

    - by Vic
    Hi, I'm working on a project for the iPad. It is a small program, and I need it to communicate with another piece of software that runs on Windows and acts like a server; so the application that I'm creating for the iPad will be the client. I'm using CFNetwork to do socket communication. This is the way I'm establishing the connection:

        char ip[] = "192.168.0.244";
        NSString *ipAddress = [[NSString alloc] initWithCString:ip];

        /* Build our socket context; this ties an instance of self to the socket */
        CFSocketContext CTX = { 0, self, NULL, NULL, NULL };

        /* Create the server socket as a TCP IPv4 socket and set a callback */
        /* for calls to the socket's lower-level connect() function */
        TCPClient = CFSocketCreate(NULL, PF_INET, SOCK_STREAM, IPPROTO_TCP,
                                   kCFSocketDataCallBack, (CFSocketCallBack)ConnectCallBack, &CTX);
        if (TCPClient == NULL)
            return;

        /* Set the port and address we want to listen on */
        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_len = sizeof(addr);
        addr.sin_family = AF_INET;
        addr.sin_port = htons(PORT);
        addr.sin_addr.s_addr = inet_addr([ipAddress UTF8String]);

        CFDataRef connectAddr = CFDataCreate(NULL, (unsigned char *)&addr, sizeof(addr));
        CFSocketConnectToAddress(TCPClient, connectAddr, -1);

        CFRunLoopSourceRef sourceRef = CFSocketCreateRunLoopSource(kCFAllocatorDefault, TCPClient, 0);
        CFRunLoopAddSource(CFRunLoopGetCurrent(), sourceRef, kCFRunLoopCommonModes);
        CFRelease(sourceRef);
        CFRunLoopRun();

    And this is the way I send the data, which is basically a byte array:

        /* The native socket, used for various operations */
        // TCPClient is a CFSocketRef member variable
        CFSocketNativeHandle sock = CFSocketGetNative(TCPClient);
        Byte byteData[3];
        byteData[0] = 0;
        byteData[1] = 4;
        byteData[2] = 0;
        send(sock, byteData, strlen(byteData)+1, 0);

    Finally, as you may have noticed, when I create the server socket I register a callback for the kCFSocketDataCallBack type. This is the code:

        void ConnectCallBack(CFSocketRef socket, CFSocketCallBackType type, CFDataRef address, const void *data, void *info) {
            // SocketsViewController is the class that contains all the methods
            SocketsViewController *obj = (SocketsViewController*)info;
            UInt8 *unsignedData = (UInt8 *) CFDataGetBytePtr(data);
            char *value = (char*)unsignedData;
            NSString *text = [[NSString alloc] initWithCString:value length:strlen(value)];
            [obj writeToTextView:text];
            [text release];
        }

    This callback is actually being invoked in my code. The problem is that I don't know how to get the data that the Windows server sent me. I'm expecting to receive an array of bytes, but I don't know how to get those bytes from the data param. If anyone can help me find a way to do this, or point me to another way to get the data from the server in my client application, I would really appreciate it. Thanks.

  • Physics game programming with Box2D - orienting a turret-like object using torques

    - by egarcia
    This is a problem I hit when trying to implement a game using the LÖVE engine, which wraps Box2D with Lua scripting. The objective is simple: a turret-like object (seen from the top, in a 2D environment) needs to orient itself so it points at a target. The turret is at the x,y coordinates, and the target is at tx,ty. We can consider that x,y are fixed, but tx,ty tend to vary from one instant to the next (i.e. they might be the mouse cursor). The turret has a rotor that can apply a rotational force (torque) at any given moment, clockwise or counter-clockwise. The magnitude of that force has an upper limit called maxTorque. The turret also has a certain rotational inertia, which acts for angular movement the same way mass acts for linear movement. There's no friction of any kind, so the turret will keep spinning if it has an angular velocity.

    The turret has a small AI function that re-evaluates its orientation to verify that it points in the right direction, and activates the rotor. This happens every dt (~60 times per second). It looks like this right now:

        function Turret:update(dt)
          local x,y = self:getPosition()
          local tx,ty = self:getTarget()
          local maxTorque = self:getMaxTorque() -- max force of the turret rotor
          local inertia = self:getInertia()     -- the rotational inertia
          local w = self:getAngularVelocity()   -- current angular velocity of the turret
          local angle = self:getAngle()         -- the angle the turret is facing currently
          -- the angle of the line that links the turret center with the target
          local targetAngle = math.atan2(ty-y, tx-x)
          local differenceAngle = _normalizeAngle(targetAngle - angle)
          if differenceAngle <= math.pi then -- counter-clockwise is the shortest path
            self:applyTorque(maxTorque)
          else -- clockwise is the shortest path
            self:applyTorque(-maxTorque)
          end
        end

    ... and it fails. Let me explain with an illustrative situation: the turret "oscillates" around the targetAngle. If the target is right behind the turret, just a little clockwise, the turret will start applying clockwise torques and keep applying them until the instant it surpasses the target angle. At that moment it will start applying torques in the opposite direction. But it will have gained a significant angular velocity, so it will keep going clockwise for some time... until the target is "just behind, but a bit counter-clockwise". And it will start again. So the turret will oscillate, or even go in circles.

    I think my turret should start applying torques in the "opposite direction of the shortest path" before it reaches the target angle (like a car braking before stopping). Intuitively, I think the turret should start applying torques in the opposite direction when it is about half-way to the target. My intuition tells me that it has something to do with the angular velocity. And then there's the fact that the target is mobile - I don't know if I should take that into account somehow or just ignore it. How do I calculate when the turret must "start braking"?
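
    One standard way to attack the oscillation is bang-bang control with a braking condition: apply full torque toward the target until the angle needed to coast to a stop (w^2 / (2*alpha), with alpha = maxTorque/inertia) exceeds the angle remaining, then apply full torque the other way. A sketch of that controller in Python rather than Lua, ignoring the target's motion (which the question suggests may be acceptable):

        import math

        def normalize_angle(a):
            # Map any angle to (-pi, pi] so the "shortest path" has a sign.
            return math.atan2(math.sin(a), math.cos(a))

        def turret_torque(angle, w, target_angle, max_torque, inertia):
            """Bang-bang controller: accelerate toward the target, but start
            braking once the angle needed to stop exceeds the angle left."""
            alpha = max_torque / inertia           # max angular acceleration
            diff = normalize_angle(target_angle - angle)
            stop_dist = (w * w) / (2.0 * alpha)    # angle covered while braking to w = 0
            if diff * w > 0 and abs(diff) <= stop_dist:
                # Moving toward the target and no longer able to stop in time:
                # apply full torque against the current velocity.
                return -math.copysign(max_torque, w)
            # Otherwise accelerate along the shortest path.
            return math.copysign(max_torque, diff)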

  • MySQL index optimization for a table with multiple indexes that index some of the same columns

    - by Sean
    I have a table that stores some basic data about visitor sessions on third-party web sites. This is its structure:

        id, site_id, unixtime, unixtime_last, ip_address, uid

    There are four indexes: id, site_id/unixtime, site_id/ip_address, and site_id/uid.

    There are many different ways that we query this table, and all of them are specific to the site_id. The index with unixtime is used to display the list of visitors for a given date or time range. The other two are used to find all visits from an IP address or a "uid" (a unique cookie value created for each visitor), as well as to determine whether this is a new visitor or a returning visitor. Obviously, storing site_id inside 3 indexes is inefficient for both write speed and storage, but I see no way around it, since I need to be able to quickly query this data for a given specific site_id. Any ideas on making this more efficient?

    I don't really understand B-trees beyond some very basic stuff, but it's more efficient to have the left-most column of an index be the one with the least variance - correct? I considered making site_id the second column of the index for both ip_address and uid, but I think that would make the index less efficient, since the IP and UID are going to vary more than the site ID will: we only have about 8,000 unique sites per database server, but millions of unique visitors across all ~8,000 sites on a daily basis.

    I've also considered removing site_id from the IP and UID indexes completely, since the chances of the same visitor going to multiple sites that share the same database server are quite small, but in cases where this does happen, I fear it could be quite slow to determine whether this is a new visitor to this site_id or not. The query would be something like:

        select id from sessions where uid = 'value' and site_id = 123 limit 1

    ... so if this visitor had visited this site before, it would only need to find one row with this site_id before it stopped. This wouldn't necessarily be super fast, but it would be acceptably fast. But say we have a site that gets 500,000 visitors a day, and a particular visitor loves this site and goes there 10 times a day. Now they happen to hit another site on the same database server for the first time. The above query could take quite a long time to search through all of the potentially thousands of rows for this UID, scattered all over the disk, since it wouldn't be finding one for this site ID.

    Any insight on making this as efficient as possible would be appreciated :)

    Update: this is a MyISAM table with MySQL 5.0. My concerns are with both performance and storage space. This table is both read- and write-heavy. If I had to choose between performance and storage, my biggest concern is performance - but both are important. We use memcached heavily in all areas of our service, but that's not an excuse not to care about the database design. I want the database to be as efficient as possible.

  • pathinfo vs fnmatch

    - by zaf
    There was a small debate regarding the speed of fnmatch over pathinfo here: http://stackoverflow.com/questions/2692536/how-to-check-if-file-is-php

    I wasn't totally convinced, so I decided to benchmark the two functions. Using dynamic and static paths showed that pathinfo was faster. Are my benchmarking logic and conclusion valid? I include a sample of the results, which are in seconds, for 100,000 iterations on my machine:

        dynamic path
        pathinfo 3.79311800003
        fnmatch  5.10071492195
        x1.34

        static path
        pathinfo 1.03921294212
        fnmatch  2.37709188461
        x2.29

    Code:

        <?php
        $iterations=100000;

        // Benchmark with dynamic file path
        print("dynamic path\n");
        $i=$iterations;
        $t1=microtime(true);
        while($i-->0){
            $f='/'.uniqid().'/'.uniqid().'/'.uniqid().'/'.uniqid().'.php';
            if(pathinfo($f,PATHINFO_EXTENSION)=='php') $d=uniqid();
        }
        $t2=microtime(true) - $t1;
        print("pathinfo $t2\n");

        $i=$iterations;
        $t1=microtime(true);
        while($i-->0){
            $f='/'.uniqid().'/'.uniqid().'/'.uniqid().'/'.uniqid().'.php';
            if(fnmatch('*.php',$f)) $d=uniqid();
        }
        $t3 = microtime(true) - $t1;
        print("fnmatch $t3\n");
        print('x'.round($t3/$t2,2)."\n\n");

        // Benchmark with static file path
        print("static path\n");
        $f='/'.uniqid().'/'.uniqid().'/'.uniqid().'/'.uniqid().'.php';
        $i=$iterations;
        $t1=microtime(true);
        while($i-->0) if(pathinfo($f,PATHINFO_EXTENSION)=='php') $d=uniqid();
        $t2=microtime(true) - $t1;
        print("pathinfo $t2\n");
        $i=$iterations;
        $t1=microtime(true);
        while($i-->0) if(fnmatch('*.php',$f)) $d=uniqid();
        $t3=microtime(true) - $t1;
        print("fnmatch $t3\n");
        print('x'.round($t3/$t2,2)."\n\n");
        ?>
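
    The static-path half of the experiment translates almost directly to Python, for anyone who wants to sanity-check the pattern-match-versus-string-parse gap in another runtime. This is an analogous micro-benchmark, not the PHP one; os.path.splitext plays the role of pathinfo:

        import fnmatch
        import os.path
        import timeit
        import uuid

        # Static path case: one fixed path, checked repeatedly.
        path = "/%s/%s.php" % (uuid.uuid4().hex, uuid.uuid4().hex)

        n = 100000
        t_splitext = timeit.timeit(lambda: os.path.splitext(path)[1] == ".php", number=n)
        t_fnmatch = timeit.timeit(lambda: fnmatch.fnmatch(path, "*.php"), number=n)

        print("splitext %.3fs" % t_splitext)
        print("fnmatch  %.3fs  x%.2f" % (t_fnmatch, t_fnmatch / t_splitext))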

  • Qt - Calling widget parent's slots

    - by bullettime
    I wrote a small program to test accessing a widget parent's slot. Basically, it has two classes. Widget:

        namespace Ui {
            class Widget;
        }

        class Widget : public QWidget {
            Q_OBJECT
        public:
            Widget(QWidget *parent = 0);
            ~Widget();
            QLabel *newlabel;
            QString foo;
        public slots:
            void changeLabel();
        private:
            Ui::Widget *ui;
        };

        Widget::Widget(QWidget *parent) : QWidget(parent), ui(new Ui::Widget) {
            ui->setupUi(this);
            customWidget *cwidget = new customWidget();
            newlabel = new QLabel("text");
            foo = "hello world";
            this->ui->formLayout->addWidget(newlabel);
            this->ui->formLayout->addWidget(cwidget);
            connect(this->ui->pushButton,SIGNAL(clicked()),cwidget,SLOT(callParentSlot()));
            connect(this->ui->pb,SIGNAL(clicked()),this,SLOT(changeLabel()));
        }

        void Widget::changeLabel(){
            newlabel->setText(this->foo);
        }

    and customWidget:

        class customWidget : public QWidget {
            Q_OBJECT
        public:
            customWidget();
            QPushButton *customPB;
        public slots:
            void callParentSlot();
        };

        customWidget::customWidget() {
            customPB = new QPushButton("customPB");
            QHBoxLayout *hboxl = new QHBoxLayout();
            hboxl->addWidget(customPB);
            this->setLayout(hboxl);
            connect(this->customPB,SIGNAL(clicked()),this,SLOT(callParentSlot()));
        }

        void customWidget::callParentSlot(){
            ((Widget*)this->parentWidget())->changeLabel();
        }

    In the main function, I simply created an instance of Widget and called show() on it. This Widget instance has a label, a QString, an instance of the customWidget class, and two buttons (inside the ui class, pushButton and pb). One of the buttons calls a slot in its own class called changeLabel() that, as the name suggests, changes the label to whatever is set in the QString contained in it. I made that just to check that changeLabel() worked; this button is working fine. The other button calls a slot in the customWidget instance, named callParentSlot(), that in turn tries to call the changeLabel() slot in its parent. Since in this case I know that its parent is in fact an instance of Widget, I cast the return value of parentWidget() to Widget*. This button crashes the program. I made a button within customWidget to try to call customWidget's parent slot as well, but it also crashes the program. I followed what was on this question. What am I missing?

  • agent-based simulation: performance issue: Python vs NetLogo & Repast

    - by max
    I'm replicating a small piece of the Sugarscape agent simulation model in Python 3. I found that my code is ~3 times slower than NetLogo's. Is it likely a problem with my code, or is it an inherent limitation of Python? Obviously, this is just a fragment of the code, but it's where Python spends two-thirds of the run-time. I hope that if I wrote something really inefficient, it shows up in this fragment:

        UP = (0, -1)
        RIGHT = (1, 0)
        DOWN = (0, 1)
        LEFT = (-1, 0)
        all_directions = [UP, DOWN, RIGHT, LEFT]

        # point is just a tuple (x, y)
        def look_around(self):
            max_sugar_point = self.point
            max_sugar = self.world.sugar_map[self.point].level
            min_range = 0
            random.shuffle(self.all_directions)
            for r in range(1, self.vision+1):
                for d in self.all_directions:
                    p = ((self.point[0] + r * d[0]) % self.world.surface.length,
                         (self.point[1] + r * d[1]) % self.world.surface.height)
                    if self.world.occupied(p):  # checks if p is in a lookup table (dict)
                        continue
                    if self.world.sugar_map[p].level > max_sugar:
                        max_sugar = self.world.sugar_map[p].level
                        max_sugar_point = p
            if max_sugar_point is not self.point:
                self.move(max_sugar_point)

    Roughly equivalent code in NetLogo (this fragment does a bit more than the Python function above):

        ; -- The SugarScape growth and motion procedures. --
        to M ; Motion rule (page 25)
          locals [ps p v d]
          set ps (patches at-points neighborhood) with [count turtles-here = 0]
          if (count ps > 0) [
            set v psugar-of max-one-of ps [psugar]            ; v is max sugar w/in vision
            set ps ps with [psugar = v]                       ; ps is legal sites w/ v sugar
            set d distance min-one-of ps [distance myself]    ; d is min dist from me to ps agents
            set p random-one-of ps with [distance myself = d] ; p is one of the min dist patches
            if (psugar >= v and includeMyPatch?) [set p patch-here]
            setxy pxcor-of p pycor-of p                       ; jump to p
            set sugar sugar + psugar-of p                     ; consume its sugar
            ask p [setpsugar 0]                               ; .. setting its sugar to 0
          ]
          set sugar sugar - metabolism                        ; eat sugar (metabolism)
          set age age + 1
        end

    On my computer, the Python code takes 15.5 sec to run 1000 steps; on the same laptop, the NetLogo simulation, running in Java inside the browser, finishes 1000 steps in less than 6 sec.

    EDIT: Just checked Repast, using the Java implementation. It's also about the same as NetLogo, at 5.4 sec. Recent comparisons between Java and Python suggest no advantage to Java, so I guess it's just my code that's to blame?

    EDIT: I understand MASON is supposed to be even faster than Repast, and yet it still runs Java in the end.
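
    Before blaming the language, it may be worth squeezing the obvious constant factors out of the inner loop. Below is a sketch of the same function with standard CPython micro-optimizations (attribute lookups hoisted into locals, tuples unpacked once); it assumes the question's world/sugar_map/occupied interface, which is not shown in full:

        import random

        def look_around(self):
            # Hoist repeated attribute lookups out of the hot loop.
            px, py = self.point
            world = self.world
            sugar_map = world.sugar_map
            occupied = world.occupied
            length = world.surface.length
            height = world.surface.height

            best_point = self.point
            best_sugar = sugar_map[self.point].level

            directions = all_directions[:]
            random.shuffle(directions)
            for r in range(1, self.vision + 1):
                for dx, dy in directions:
                    p = ((px + r * dx) % length, (py + r * dy) % height)
                    if occupied(p):
                        continue
                    level = sugar_map[p].level
                    if level > best_sugar:
                        best_sugar = level
                        best_point = p
            if best_point is not self.point:
                self.move(best_point)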

  • Making a Grid in an NSView

    - by Hooligancat
    I currently have an NSView that draws a grid pattern (essentially a guide of horizontal and vertical lines), with the idea being that a user can change the spacing of the grid and the color of the grid. The purpose of the grid is to act as a guideline for the user when lining up objects. Everything works just fine, with one exception: when I resize the NSWindow by dragging the resize handle, if my grid spacing is particularly small (say 10 pixels), the drag resize becomes lethargic. My drawRect code for the grid is as follows:

        -(void)drawRect:(NSRect)dirtyRect {
            NSRect thisViewSize = [self bounds];

            // Set the line color
            [[NSColor colorWithDeviceRed:0 green:(255/255.0) blue:(255/255.0) alpha:1] set];

            // Draw the vertical lines first
            NSBezierPath * verticalLinePath = [NSBezierPath bezierPath];
            int gridWidth = thisViewSize.size.width;
            int gridHeight = thisViewSize.size.height;
            int i = 0;
            while (i < gridWidth) {
                i = i + [self currentSpacing];
                NSPoint startPoint = {i,0};
                NSPoint endPoint = {i, gridHeight};
                [verticalLinePath setLineWidth:1];
                [verticalLinePath moveToPoint:startPoint];
                [verticalLinePath lineToPoint:endPoint];
                [verticalLinePath stroke];
            }

            // Draw the horizontal lines
            NSBezierPath * horizontalLinePath = [NSBezierPath bezierPath];
            i = 0;
            while (i < gridHeight) {
                i = i + [self currentSpacing];
                NSPoint startPoint = {0,i};
                NSPoint endPoint = {gridWidth, i};
                [horizontalLinePath setLineWidth:1];
                [horizontalLinePath moveToPoint:startPoint];
                [horizontalLinePath lineToPoint:endPoint];
                [horizontalLinePath stroke];
            }
        }

    I suspect this is entirely to do with the way I am drawing the grid, and I am open to suggestions on how I might better go about it. I can see where the inefficiency is coming in: drag-resizing the NSWindow constantly calls the drawRect in this view as it resizes, and the tighter the grid, the more calculations per pixel drag of the parent window. I was thinking of hiding the view on the resize of the window, but it doesn't feel as dynamic. I want the user experience to be very smooth, without any perceived delay or flickering. Does anyone have any ideas on a better or more efficient method of drawing the grid? All help, as always, very much appreciated.

  • Marshal.PtrToStructure (and back again) and generic solution for endianness swapping

    - by cgyDeveloper
    I have a system where a remote agent sends serialized structures (from an embedded C system) for me to read and store via IP/UDP. In some cases I need to send back the same structure types. I thought I had a nice setup using Marshal.PtrToStructure (receive) and Marshal.StructureToPtr (send). However, a small gotcha is that the network's big-endian integers need to be converted to my x86 little-endian format to be used locally. When I'm sending them off again, big-endian is the way to go. Here are the functions in question:

        private static T BytesToStruct<T>(ref byte[] rawData) where T : struct
        {
            T result = default(T);
            GCHandle handle = GCHandle.Alloc(rawData, GCHandleType.Pinned);
            try
            {
                IntPtr rawDataPtr = handle.AddrOfPinnedObject();
                result = (T)Marshal.PtrToStructure(rawDataPtr, typeof(T));
            }
            finally
            {
                handle.Free();
            }
            return result;
        }

        private static byte[] StructToBytes<T>(T data) where T : struct
        {
            byte[] rawData = new byte[Marshal.SizeOf(data)];
            GCHandle handle = GCHandle.Alloc(rawData, GCHandleType.Pinned);
            try
            {
                IntPtr rawDataPtr = handle.AddrOfPinnedObject();
                Marshal.StructureToPtr(data, rawDataPtr, false);
            }
            finally
            {
                handle.Free();
            }
            return rawData;
        }

    And a quick example structure that might be used like this:

        byte[] data = this.sock.Receive(ref this.ipep);
        Request request = BytesToStruct<Request>(ref data);

    Where the structure in question looks like:

        [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Ansi, Pack = 1)]
        private struct Request
        {
            public byte type;
            public short sequence;
            [MarshalAs(UnmanagedType.ByValArray, SizeConst = 5)]
            public byte[] address;
        }

    What (generic) way can I swap the endianness when marshalling the structures? My need is such that the locally stored 'public short sequence' in this example will be little-endian for displaying to the user. I don't want to have to swap the endianness in a structure-specific way. My first thought was to use Reflection, but I'm not very familiar with that feature. Also, I hoped that there would be a better solution out there that somebody could point me towards. Thanks in advance :)
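
    The underlying byte-order problem is easy to demonstrate in isolation. A small Python illustration (illustrative only, not the C# marshalling code) of why a big-endian short off the wire must be swapped before use on a little-endian host:

        import struct

        # A 16-bit sequence number as it arrives on the wire (network/big-endian).
        wire = struct.pack(">h", 258)          # b'\x01\x02'

        # Misread as little-endian it becomes a completely different value.
        wrong = struct.unpack("<h", wire)[0]   # 513
        right = struct.unpack(">h", wire)[0]   # 258
        print(wrong, right)

        # The generic fix is the same idea as the question's: walk the struct's
        # multi-byte integer fields and byte-swap each one on receive/send.
        print(int.from_bytes(wire, "big"), int.from_bytes(wire, "little"))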

  • WCF web service Data Members defaulting to null

    - by James
    I am new to WCF and created a simple REST service to accept an order object (a series of strings from an XML file), insert that data into a database, and then return an order object that contains the results. To test the service, I created a small web project and sent over a stream created from an XML doc. The problem is that even though all of the items in the XML doc get placed into the stream, the service is nullifying some of them when it receives the data. For example, lineItemId will have a value but shipmentStatus will show null. I step through the XML creation and verify that all the values are being sent. However, if I clear the data members and change the names around, it can work. Any help would be appreciated. This is the interface code:

        [ServiceContract]
        public interface IShipping
        {
            [OperationContract]
            [WebInvoke(Method = "POST", UriTemplate = "/Orders/UpdateOrderStatus/",
                       BodyStyle = WebMessageBodyStyle.Bare)]
            ReturnOrder UpdateOrderStatus(Order order);
        }

        [DataContract]
        public class Order
        {
            [DataMember]
            public string lineItemId { get; set; }
            [DataMember]
            public string shipmentStatus { get; set; }
            [DataMember]
            public string trackingNumber { get; set; }
            [DataMember]
            public string shipmentDate { get; set; }
            [DataMember]
            public string delvryMethod { get; set; }
            [DataMember]
            public string shipmentCarrier { get; set; }
        }

        [DataContract]
        public class ReturnOrder
        {
            [DataMember(Name = "Result")]
            public string Result { get; set; }
        }

    This is what I'm using to send over an Order object:

        string lineId = txtLineItem.Text.Trim();
        string status = txtDeliveryStatus.Text.Trim();
        string TrackingNumber = "1x22-z4r32";
        string theMethod = "Ground";
        string carrier = "UPS";
        string ShipmentDate = "04/27/2010";

        XNamespace nsOrders = "http://tempuri.org/order";
        XElement myDoc = new XElement(nsOrders + "Order",
            new XElement(nsOrders + "lineItemId", lineId),
            new XElement(nsOrders + "shipmentStatus", status),
            new XElement(nsOrders + "trackingNumber", TrackingNumber),
            new XElement(nsOrders + "delvryMethod", theMethod),
            new XElement(nsOrders + "shipmentCarrier", carrier),
            new XElement(nsOrders + "shipmentDate", ShipmentDate)
        );

        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(
            "http://localhost:3587/Deposco.svc/wms/Orders/UpdateOrderStatus/");
        request.Method = "POST";
        request.ContentType = "application/xml";
        try
        {
            request.ContentLength = myDoc.ToString().Length;
            StreamWriter sw = new StreamWriter(request.GetRequestStream());
            sw.Write(myDoc);
            sw.Close();
            using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
            {
                StreamReader reader = new StreamReader(response.GetResponseStream());
                string responseString = reader.ReadToEnd();
                XDocument.Parse(responseString).Save(@"c:\DeposcoSvcWCF.xml");
            }
        }
        catch (WebException wEx)
        {
            Stream errorStream = ((HttpWebResponse)wEx.Response).GetResponseStream();
            string errorMsg = new StreamReader(errorStream).ReadToEnd();
        }

  • Using a type parameter and a pointer to the same type parameter in a function template

    - by Darel
    Hello, I've written a template function to determine the median of any vector or array of any type that can be sorted with sort. The function and a small test program are below:

        #include <algorithm>
        #include <vector>
        #include <iostream>
        using namespace::std;

        template <class T, class X>
        void median(T vec, size_t size, X& ret)
        {
            sort(vec, vec + size);
            size_t mid = size/2;
            ret = size % 2 == 0 ? (vec[mid] + vec[mid-1]) / 2 : vec[mid];
        }

        int main()
        {
            vector<double> v;
            v.push_back(2);
            v.push_back(8);
            v.push_back(7);
            v.push_back(4);
            v.push_back(9);
            double a[5] = {2, 8, 7, 4, 9};
            double r;
            median(v.begin(), v.size(), r);
            cout << r << endl;
            median(a, 5, r);
            cout << r << endl;
            return 0;
        }

    As you can see, the median function takes a pointer as an argument, T vec. Also in the argument list is a reference variable, X& ret, which is modified by the function to store the computed median value. However, I don't find this a very elegant solution. T vec will always be a pointer to the same type as X ret. My initial attempts to write median had a header like this:

        template<class T>
        T median(T *vec, size_t size)
        {
            sort(vec, vec + size);
            size_t mid = size/2;
            return size % 2 == 0 ? (vec[mid] + vec[mid-1]) / 2 : vec[mid];
        }

    I also tried:

        template<class T, class X>
        X median(T vec, size_t size)
        {
            sort(vec, vec + size);
            size_t mid = size/2;
            return size % 2 == 0 ? (vec[mid] + vec[mid-1]) / 2 : vec[mid];
        }

    I couldn't get either of these to work. My question is: can anyone show me a working implementation of either of my alternatives? Thanks for looking!

  • Add an inner margin to a 4-column CSS layout

    - by Martín Marconcini
    I am not a CSS expert (mainly because I haven't had the need to use much HTML/CSS stuff lately), so I came up with the following style/divs to create a 4-column layout:

        <style type="text/css">
        <!--
        .columns:after {
            content: ".";
            display: block;
            height: 0;
            clear: both;
            visibility: hidden;
        }
        * html .columns { height: 1%; }
        .columns { display: inline-block; }
        .columns { display: block; }
        .columns .column { float: left; overflow: hidden; display: inline; }
        .columns .last { float: right; }
        .col4 .first  { left: auto; width: 25%; }
        .col4 .second { left: auto; width: 25%; }
        .col4 .third  { left: auto; width: 25%; }
        .col4 .last   { left: auto; width: 25%; }
        -->
        </style>

    note: most of this stuff comes from this google result; I just adapted it to 4 columns. The HTML then looks like this:

        <div class="columns col4">
            <div class="column first"> SOME TEXT </div><!-- /.first -->
            <div class="column second"> MORE TEXT</div><!-- /.second -->
            <div class="column third"> SOME MORE TEXT </div><!-- /.third -->
            <div class="column last"> SOME LAST TEXT </div><!-- /.last -->
        </div><!-- /.columns -->

    OK, I've simplified that a bit (there's a small image and some <h2> text in there too), but the thing is that I'd like to add some space between the columns. Here's how it looks now: [screenshot omitted]

    Do you have any idea which CSS property I should touch? note: if I add margin or padding, one column shifts down because (as I understand it) it doesn't fit. There might be other CSS files involved as well, since this came in a template (I have been asked for this change, but I didn't do any of this, as usual). Thanks for any insight.

  • Programming a Windows service

    - by xarzu
    I have started programming a Windows service. I have added a notify icon from the toolbox; it puts the small notify icon in the systray alongside the other icons, and it works so far. So far I have a blank form, and I have used the DoubleClick event of the notifyIcon to bring up the form (I will use the form for something later). Now I have a list of things I want to accomplish to make this work like a true Windows service.

    First of all, if possible, I would like to remove the maximize and cancel buttons on the form. Most Windows service apps that I have seen offer the ability to close the app by right-clicking the notify icon, which brings up a menu of options. I see in the properties of the form, under Misc, that there is a CancelButton, but I do not see how to deactivate it. In the properties of the form, under Window Style, there is a ControlBox option that, if I set it to false, makes all three buttons (minimize, maximize and cancel) go away. This is not what I am looking for: I only want to remove the option to resize, maximize or close the form here. I suspect people will close the box intending to make it go away while still wanting the app to run.

    Under the "Focus" caption in Properties, there is "Deactivate". I have created my own event/method/function for this, and in debug I noticed that when you click on the x-box in the upper right corner, this function is called. The problem is that after the function is over, the app closes anyway. How do I override this behavior?

    Secondly, how do you catch the right-button click event on the notify icon in the systray? I can see how to create events for "Click" and "MouseClick" etc., but how do I determine which button was clicked? Using the right-button click is how such programs know when to pull up a menu, so I would like to know how to do this as well.

  • Ajax problem: data not displaying when using multiple JavaScript calls

    - by Ronedog
    I'm writing an app that uses Ajax to retrieve data from a MySQL db using PHP. Because of the nature of the app, the user clicks an href link that has an "onclick" event used to call the JavaScript/Ajax. I retrieve the data from MySQL, then call a separate PHP function which creates a small HTML table with the necessary data in it. The new table gets passed back to the responseText and is displayed inside a div tag. The tables only have around 10-20 rows of data in them. This functionality is working fine and displays the data in HTML form exactly as it needs to be on the page.

    The problem is this: the href "onclick" event needs to run multiple scripts, one right after the other. The first script updates the "existing" data, and inside the "update_existing" function is a call to refresh a section of the page with the updated HTML from the responseText. When that is done, a "display_html" function is called, which updates a different section of the page with its newly created HTML table. The event looks like this: "Update". This string gets built dynamically using PHP with parameters supplied, but for this example I simply took the parameters out so it didn't get confusing. The update_existing() function actually calls the display_html() function, which updates a section of the page as needed. I need to update a different section of the page on the same click of the mouse, right after the update, which is why I'm calling display_html() again right after it.

    The problem is that only the last call is reflected on my screen. In other words, the second function call, display_html(), runs and displays the refreshed data just fine, but the previous call to update_existing() runs and updates the database properly yet doesn't display on the screen unless I press the browser's "refresh" button - which of course displays the new data exactly how I want it to, but I don't want the users to have to press "refresh". I tried adding multiple display_html() calls one right after the other, separating all of them with semicolons, and learned that only the very last function call actually refreshed the div element on the HTML page with the table information; all the previous display_html() calls ran, but their results couldn't be seen on the page without a refresh of the browser.

    Is this a problem with JavaScript, or with the Ajax call, or is it a limitation in the DOM that only allows one element to be updated at a time? The Ajax call is asynchronous; I've tried both, and only async works, period. The behavior is the same in both Firefox and Internet Explorer. Any ideas what's going on and how to get around it so I can run these multiple scripts?

  • trouble with boost::filesystem::wrecursive_directory_iterator

    - by Dogmatixed
    I'm trying to write a program to help me manage my iTunes library, including removing duplicates and cataloging certain things. At this point I'm still just trying to get it to walk through all the folders, and I have run into a problem: I have a small amount of Japanese music, where the artist and/or album is written in Japanese characters. Because of how iTunes arranges things in its library, the directories contain these characters. "Shouldn't be a problem, though," I thought, because the boost::filesystem library has a wide-character version of its recursive iterator. But when I actually try to use it, it seems to completely stop when it hits the first Japanese character - a complete stop, as in it doesn't even finish printing the line, no carriage return or anything.

    Now, I'm still pretty new to programming, so I'm assuming it's my mistake. Does anyone know why this is happening? Here's what I think is the relevant code:

        fs::wrecursive_directory_iterator end_it;
        int i;
        try {
            for(fs::wrecursive_directory_iterator rec_it(full_path); rec_it != end_it; ++rec_it) {
                for(i = 0; i < rec_it.level(); i++) {
                    out << "\t";
                }
                out << rec_it->string() << std::endl;
            }
        }
        catch(std::exception e) {
            out << "something went wrong: " << e.what();
        }

    And from my output file, minus some of the path:

        /Test Libs/Combine
        /Test Libs/Lib1
        /Test Libs/Lib1/02 Too Long.m4a
        /Test Libs/Lib1/03 Like a Hitman, Like a Dancer.mp3
        /Test Libs/Lib1/A Certain Ratio
        /Test Libs/Lib1/A Certain Ratio/Beyond Punk!
        /Test Libs/Lib1/A Certain Ratio/Unknown Album
        /Test Libs/Lib1/A Certain Ratio/Unknown Album/Do The Du.mp3
        /Test Libs/Lib1/A Certain Ratio/Unknown Album/Shack Up.mp3
        /Test Libs/Lib1/

    Finally, what I expect:

        /Test Libs/Combine
        /Test Libs/Lib1
        /Test Libs/Lib1/02 Too Long.m4a
        /Test Libs/Lib1/03 Like a Hitman, Like a Dancer.mp3
        /Test Libs/Lib1/A Certain Ratio
        /Test Libs/Lib1/A Certain Ratio/Beyond Punk!
        /Test Libs/Lib1/A Certain Ratio/Unknown Album
        /Test Libs/Lib1/A Certain Ratio/Unknown Album/Do The Du.mp3
        /Test Libs/Lib1/A Certain Ratio/Unknown Album/Shack Up.mp3
        /Test Libs/Lib1/???
        /Test Libs/Lib1/Bring it on
        /Test Libs/Lib1/04 Bring it on.mp3

    Any thoughts? Thanks.
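
    As a side check that the on-disk names themselves are intact, the same walk is easy to reproduce in Python 3, where filesystem paths are handled as Unicode strings (the root path here is the question's; this is a diagnostic aid, not a fix for the Boost code):

        import os

        # Walk the library and print every entry, indented by depth. Python 3
        # returns str (Unicode) paths, so Japanese folder names print as-is,
        # provided the console encoding can represent them.
        root = "/Test Libs"
        for dirpath, dirnames, filenames in os.walk(root):
            depth = dirpath[len(root):].count(os.sep)
            for name in dirnames + filenames:
                print("\t" * depth + os.path.join(dirpath, name))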

  • How would you organize this Javascript?

    - by Anurag
    How do you usually organize complex web applications that are extremely rich on the client side? I have created a contrived example to indicate the kind of mess it's easy to get into if things are not managed well for big apps. Feel free to modify/extend this example as you wish - http://jsfiddle.net/NHyLC/1/

    The example basically mirrors part of the comment posting on SO, and follows these rules:
    1. A comment must have 15 characters minimum, after runs of multiple spaces are trimmed to one.
    2. If Add Comment is clicked but the size is less than 15 after removing multiple spaces, show a popup with the error.
    3. Indicate the number of characters remaining, summarized with color coding: gray indicates a small comment, brown a medium comment, orange a large comment, and red a comment overflow.
    4. Only one comment can be submitted every 15 seconds. If a comment is submitted too soon, show a popup with an appropriate error message.

    A couple of issues I noticed with this example:
    1. This should ideally be a widget or some sort of packaged functionality. Things like one comment per 15 seconds and a minimum 15-character comment belong to application-wide policies, rather than being embedded inside each widget.
    2. Too many hard-coded values.
    3. No code organization. Models, views, and controllers are all bundled together. Not that MVC is the only approach for organizing rich client-side web applications, but there is none in this example.

    How would you go about cleaning this up? Applying a little MVC/MVP along the way? Here are some of the relevant functions, but it will make more sense if you see the entire code on jsfiddle:

        /**
         * Handle comment change.
         * Update character count.
         * Indicate progress
         */
        function handleCommentUpdate(comment) {
            var status = $('.comment-status');
            status.text(getStatusText(comment));
            status.removeClass('mild spicy hot sizzling');
            status.addClass(getStatusClass(comment));
        }

        /**
         * Is the comment valid for submission
         */
        function commentSubmittable(comment) {
            var notTooSoon = !isTooSoon();
            var notEmpty = !isEmpty(comment);
            var hasEnoughCharacters = !isTooShort(comment);
            return notTooSoon && notEmpty && hasEnoughCharacters;
        }

        // submit comment
        $('.add-comment').click(function() {
            var comment = $('.comment-box').val();
            // submit comment, fake ajax call
            if(commentSubmittable(comment)) { .. }
            // show a popup if comment is mostly spaces
            if(isTooShort(comment)) {
                if(comment.length < 15) {
                    // blink status message
                } else {
                    popup("Comment must be at least 15 characters in length.");
                }
            }
            // show a popup if comment submitted too soon
            else if(isTooSoon()) {
                popup("Only 1 comment allowed per 15 seconds.");
            }
        });

  • How can I call VC# webservice methods without ArgumentException?

    - by Zarius
    Currently, I'm trying to write a small tray application that will show the status and provide control of a server-side application exposed over a webservice. The webservice only has 3 operations: start, stop, and status. When I call any of these operations in code, they throw an ArgumentException citing "An item with the same key has already been added". I am compiling the webservice on Visual C# Express 2008 and .NET 3.5. The code:

        private TelnetConnClient Conn
        {
            get { return new TelnetConnClient(); }
        }

        private bool Connected // call webservice operations
        {
            get { return Conn.Status(); }
            set
            {
                if (value)
                    Conn.Start();
                else
                    Conn.Stop();
            }
        }

    The stacktrace:

        A first chance exception of type 'System.ArgumentException' occurred in mscorlib.dll
        at System.ThrowHelper.ThrowArgumentException(ExceptionResource resource)
        at System.Collections.Generic.Dictionary`2.Insert(TKey key, TValue value, Boolean add)
        at System.ServiceModel.TransactionFlowAttribute.ApplyBehavior(OperationDescription description, BindingParameterCollection parameters)
        at System.ServiceModel.TransactionFlowAttribute.System.ServiceModel.Description.IOperationBehavior.AddBindingParameters(OperationDescription description, BindingParameterCollection parameters)
        at System.ServiceModel.Description.DispatcherBuilder.AddBindingParameters(ServiceEndpoint endpoint, BindingParameterCollection parameters)
        at System.ServiceModel.Description.DispatcherBuilder.BuildProxyBehavior(ServiceEndpoint serviceEndpoint, BindingParameterCollection& parameters)
        at System.ServiceModel.Channels.ServiceChannelFactory.BuildChannelFactory(ServiceEndpoint serviceEndpoint)
        at System.ServiceModel.ChannelFactory.CreateFactory()
        at System.ServiceModel.ChannelFactory.OnOpening()
        at System.ServiceModel.Channels.CommunicationObject.Open(TimeSpan timeout)
        at System.ServiceModel.ChannelFactory.EnsureOpened()
        at System.ServiceModel.ChannelFactory`1.CreateChannel(EndpointAddress address, Uri via)
        at System.ServiceModel.ChannelFactory`1.CreateChannel()
        at System.ServiceModel.ClientBase`1.CreateChannel()
        at System.ServiceModel.ClientBase`1.CreateChannelInternal()
        at System.ServiceModel.ClientBase`1.get_Channel()
        at KordiaConnect.ferries.TelnetConnClient.Start() in C:\My Dropbox\Coding\RTF\KordiaConnect\KordiaConnect\Service References\ferries\Reference.cs:line 86
        at coldshark.ferries.Main.set_Connected(Boolean value) in C:\My Dropbox\Coding\RTF\KordiaConnect\KordiaConnect\Main.cs:line 22
        at coldshark.ferries.Main.<.ctor>b__0(Object sender, EventArgs e) in C:\My Dropbox\Coding\RTF\KordiaConnect\KordiaConnect\Main.cs:line 43
        at System.Windows.Forms.NotifyIcon.OnClick(EventArgs e)
        at System.Windows.Forms.NotifyIcon.WmMouseUp(Message& m, MouseButtons button)
        at System.Windows.Forms.NotifyIcon.WndProc(Message& msg)
        at System.Windows.Forms.NotifyIcon.NotifyIconNativeWindow.WndProc(Message& m)
        at System.Windows.Forms.NativeWindow.DebuggableCallback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam)
        at System.Windows.Forms.UnsafeNativeMethods.PeekMessage(MSG& msg, HandleRef hwnd, Int32 msgMin, Int32 msgMax, Int32 remove)
        at System.Windows.Forms.Application.ComponentManager.System.Windows.Forms.UnsafeNativeMethods.IMsoComponentManager.FPushMessageLoop(Int32 dwComponentID, Int32 reason, Int32 pvLoopData)
        at System.Windows.Forms.Application.ThreadContext.RunMessageLoopInner(Int32 reason, ApplicationContext context)
        at System.Windows.Forms.Application.ThreadContext.RunMessageLoop(Int32 reason, ApplicationContext context)
        at System.Windows.Forms.Application.Run()
        at coldshark.ferries.Main..ctor() in C:\My Dropbox\Coding\RTF\KordiaConnect\KordiaConnect\Main.cs:line 55

    I can just call the webservice from the web interface, but this application would give me a handy status notification icon, and I'd really love to know why the out-of-the-box auto-generated code fails for no apparent reason.

  • ColdFusion Report Builder - How can you set different datasources externally between prod/staging/dev?

    - by Smooth Operator
    ColdFusion Report Builder is great. One small issue: we use ANT+CFANT to deploy. We create the report, say against a datasource called MyApp_Dev on a dev box, and everything works great when the report is created. We deploy the report to our staging server, which has a datasource of MyApp_Staging. That server also may, or may not, have the live app working under MyApp_Live. Ant pushes the report to staging just great. Run the report: it crashes and burns. Why? It seems the report is looking for the MyApp_Dev datasource, even though the application is using the MyApp_Staging datasource.

    In digging around I found a few approaches. I would like to do this one final, ideal way from the beginning, instead of having to go back and redo dozens of reports differently when I have a new Aha! moment.

    1) Obvious: pass the datasource into the cfreport tag. Doesn't work for ColdFusion Report Builder reports as of v8, or v9 as tested on Linux.

    2) Most realistic option (but painful) so far: pass the query as an object into the ColdFusion Report Builder report. Let's think about this: Create the report with Report Builder to my heart's content, using RDS etc. on my local box. When I'm done, copy the query into a snippet of code, or into a database column, to be dynamically injected at runtime with the correct datasource. Modify my "run report" event to find the query in the database column, insert it into another dynamic cfquery and potentially... evaluate (!?!) it? The fun side is I can set the cfquery datasource to what I need for each environment. The downside: whenever I modify the report's columns in CF Report Builder, I always have to update the query in the database. Is there a snippet of code that can extract this for me? Hmm.

    3) Less than ideal: suck it up and let all the reports in staging run off the live server. Maybe copy the live data into staging (sans structural changes) to let it seem similar.

    Are there any eloquent ways to accomplish the above? Thanks in advance!

  • Insertions into Zipper trees on XML files in Clojure

    - by ivar
    I'm confused as to how to idiomatically change an XML tree accessed through clojure.contrib's zip-filter.xml. Should I be trying to do this at all, or is there a better way? Say that I have a dummy XML file "itemdb.xml" like this:

        <itemlist>
          <item id="1">
            <name>John</name>
            <desc>Works near here.</desc>
          </item>
          <item id="2">
            <name>Sally</name>
            <desc>Owner of pet store.</desc>
          </item>
        </itemlist>

    And I have some code:

        (require '[clojure.zip :as zip]
                 '[clojure.contrib.duck-streams :as ds]
                 '[clojure.contrib.lazy-xml :as lxml]
                 '[clojure.contrib.zip-filter.xml :as zf])

        (def db (ref (zip/xml-zip (lxml/parse-trim (java.io.File. "itemdb.xml")))))

        ;; Test that we can traverse and parse.
        (doall (map #(print (format "%10s: %s\n"
                                    (apply str (zf/xml-> % :name zf/text))
                                    (apply str (zf/xml-> % :desc zf/text))))
                    (zf/xml-> @db :item)))

        ;; I assume something like this is needed to make the xml tags
        (defn create-item [name desc]
          {:tag :item
           :attrs {:id "3"}
           :contents (list {:tag :name :attrs {} :contents (list name)}
                           {:tag :desc :attrs {} :contents (list desc)})})

        (def fred-item (create-item "Fred" "Green-haired astrophysicist."))

        ;; This disturbs the structure somehow
        (defn append-item [xmldb item]
          (zip/insert-right (-> xmldb zip/down zip/rightmost) item))

        ;; I want to do something more like this
        (defn append-item2 [xmldb item]
          (zip/insert-right (zip/rightmost (zf/xml-> xmldb :item)) item))

        (dosync (alter db append-item2 fred-item))

        ;; Save this simple xml file with some added stuff.
        (ds/spit "appended-itemdb.xml"
                 (with-out-str (lxml/emit (zip/root @db) :pad true)))

    I am unclear about how to use the clojure.zip functions appropriately in this case, and how that interacts with zip-filter. If you spot anything particularly weird in this small example, please point it out.

  • Reading Source Code Aloud

    - by Jon Purdy
    After seeing this question, I got to thinking about the various challenges that blind programmers face, and how some of them are applicable even to sighted programmers. Particularly, the problem of reading source code aloud gives me pause.

    I have been programming for most of my life, and I frequently tutor fellow students in programming, most often in C++ or Java. It is uniquely aggravating to try to verbally convey the essential syntax of a C++ expression. The speaker must give either an idiomatic translation into English, or a full specification of the code in verbal longhand, using explicit yet slow terms such as "opening parenthesis", "bitwise and", et cetera. Neither of these solutions is optimal. On the one hand, an idiomatic translation is only useful to a programmer who can de-translate back into the relevant programming code, which is not usually the case when tutoring a student. In turn, education (or simply getting someone up to speed on a project) is the most common situation in which source is read aloud, and there is a very small margin for error. On the other hand, a literal specification is aggravatingly slow. It takes far, far longer to say "pound, include, left angle bracket, iostream, right angle bracket, newline" than it does to simply type #include <iostream>. Indeed, most experienced C++ programmers would read this merely as "include iostream", but again, inexperienced programmers abound and literal specifications are sometimes necessary.

    So I've had an idea for a potential solution to this problem. In C++, there is a finite set of keywords (63) and operators (54, discounting named operators and treating compound assignment operators and prefix versus postfix auto-increment and decrement as distinct). There are just a few types of literal, a similar number of grouping symbols, and the semicolon. Unless I'm utterly mistaken, that's about it. So would it not then be feasible to simply ascribe a concise, unique pronunciation to each of these distinct concepts (including one for whitespace, where it is required) and go from there? Programming languages are far more regular than natural languages, so the pronunciation could be standardised. Speakers of any language would be able to verbally convey C++ code, and due to the regularity and fixity of the language, speech-to-text software could be optimised to accept C++ speech with a high degree of accuracy.

    So my question is twofold: first, is my solution feasible; and second, does anyone else have other potential solutions? I intend to take suggestions from here and use them to produce a formal paper with an example implementation of my solution.
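
    The proposed mapping is straightforward to prototype. A toy sketch in Python, where the pronunciations are invented placeholders rather than a proposed standard, and the tokenizer is assumed to exist already:

        # Toy "speakable C++" encoder: map each token to a short spoken form.
        # The pronunciations below are invented placeholders, not a standard.
        SPOKEN = {
            "#include": "include",
            "<": "angle",
            ">": "close-angle",
            "::": "scope",
            ";": "semi",
            "&": "bit-and",
            "(": "paren",
            ")": "close-paren",
        }

        def speak(tokens):
            """Render a token stream as one concise spoken line; unknown
            tokens (identifiers, literals) are spoken as themselves."""
            return " ".join(SPOKEN.get(t, t) for t in tokens)

        # '#include <iostream>' becomes 'include angle iostream close-angle'
        print(speak(["#include", "<", "iostream", ">"]))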

  • xslt help - my transform is not rendering correctly

    - by Hcabnettek
    Hi all, I'm trying to apply an XSLT transformation using the following file. This is very basic, but I wanted to build off of it once I get it working correctly:

        <?xml version="1.0" encoding="utf-8"?>
        <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
            xmlns:msxsl="urn:schemas-microsoft-com:xslt" exclude-result-prefixes="msxsl">
          <xsl:template match="/">
            <div>
              <h2>Station Inventory</h2>
              <hr/>
              <xsl:apply-templates/>
            </div>
          </xsl:template>

    Here is some XML I'm using for the source:

        <StationInventoryList xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                              xmlns="http://www.dummy-tmdd-address">
          <StationInventory>
            <station-id>9940</station-id>
            <station-name>Zone 9940-SEB</station-name>
            <station-travel-direction>SEB</station-travel-direction>
            <detector-list>
              <detector>
                <detector-id>2910</detector-id>
                <detector-name>1999 West Smith Exit SEB</detector-name>
              </detector>
              <detector>
                <detector-id>9205</detector-id>
                <detector-name>CR-155 Exit SEB</detector-name>
              </detector>
              <detector>
                <detector-id>9710</detector-id>
                <detector-name>Pt of View SEB</detector-name>
              </detector>
            </detector-list>
          </StationInventory>
        </StationInventoryList>

    Any ideas what I'm doing wrong? The simple intent here is to make a list of stations, then a list of detectors at each station. This is a small piece of the XML; the real document would have multiple StationInventory elements. I'm using the data as the source for an asp:xml control and the XSLT file as the TransformSource:

        var service = new InternalService();
        var result = service.StationInventory();
        invXml.DocumentContent = result;
        invXml.TransformSource = "StationInventory.xslt";
        invXml.DataBind();

    Any tips are of course appreciated. Have a terrific weekend.

    Cheers, ~ck
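
    One thing worth checking with this source document: it declares a default namespace, and unprefixed match patterns in XSLT 1.0 only match elements in no namespace, so templates matching names like StationInventory will find nothing unless the stylesheet binds a prefix to that namespace. A standalone harness makes this quick to verify outside ASP.NET; a sketch in Python with lxml (the file names are assumptions, not from the question's project):

        from lxml import etree

        # Apply the stylesheet to the sample document and print the result,
        # so the transform can be iterated on without the ASP.NET page.
        doc = etree.parse("StationInventory.xml")
        style = etree.parse("StationInventory.xslt")
        print(str(etree.XSLT(style)(doc)))

        # Quick namespace check: the unprefixed name finds nothing because
        # the document's elements live in the default namespace.
        ns = {"t": "http://www.dummy-tmdd-address"}
        print(len(doc.findall(".//detector")))        # 0
        print(len(doc.findall(".//t:detector", ns)))  # 3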

  • VBScript - Creating a script that mirrors several sets of folders

    - by Kenny Bones
    OK, this is my problem. I'm writing a logon script that basically copies Microsoft Word templates from a server path onto a local path on each computer. This is done using a check for group membership:

        If MemberOf(ObjGroupDict, "g_group1") Then
            oShell.Run "%comspec% /c %LOGONSERVER%\SYSVOL\mydomain.com\scripts\ROBOCOPY \\server\Templates\Group1\OFFICE2003\ " & TemplateFolder & "\" & " * /E /XO", 0, True
        End If

    Previously I used the /MIR switch of robocopy, which is excellent. But if a user is a member of more than one group, the /MIR switch removes the content from the first group, since it's mirroring the content from the second group - meaning I can't have both contents. This is "solved" by not using the /MIR switch and just letting the content get copied anyway. BUT the whole idea of having the templates on a server is so that I can control the content the users receive through the script. So if I delete a file or folder from the server path, this doesn't replicate on the local computer, since I don't use the /MIR switch anymore. Comprende?

    So, what do I do? I wrote a small script that basically checks the folders and files and then removes them accordingly, but this actually ended up being the same functionality as the /MIR switch anyway. How do I solve this problem?

    Edit: I've found that what I actually need is a routine that scans my local template folder for files and folders and checks if the same structure exists in any of the source template folders. The server template folders are set up like this:

        \\fileserver\templates\group1\
        \\fileserver\templates\group2\
        \\fileserver\templates\group3\
        \\fileserver\templates\group4\
        \\fileserver\templates\group5\
        \\fileserver\templates\group6\

    And the script that does the copying is structured like this (pseudo):

        If User is MemberOf(group1) Then
            RoboCopy.exe \\fileserver\templates\group1\ c:\templates\workgroup *.* /E /XO
        End If

        If User is MemberOf(group2) Then
            RoboCopy.exe \\fileserver\templates\group2\ c:\templates\workgroup *.* /E /XO
        End If

        If User is MemberOf(group3) Then
            RoboCopy.exe \\fileserver\templates\group3\ c:\templates\workgroup *.* /E /XO
        End If

        Etc., etc.

    With the /E switch, I make sure it copies subfolders as well, and the /XO switch only copies files and folders that are newer than those in my local path. But it doesn't consider whether the local path contains files or folders that don't exist on the server template paths. So after the copying is done, I would like to check whether any of the files or folders in c:\templates\workgroup exist in any of the sources, and if they don't, delete them from the local path - something that could be combined with these membership checks, perhaps?
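
    The cleanup pass being asked for can be stated as "delete anything local whose relative path exists in none of the user's source trees". A sketch of that pass in Python rather than VBScript (the paths are the question's; resolving group membership into the list of source roots is assumed to happen elsewhere):

        import os

        def prune_local(local_root, source_roots):
            """Remove files/dirs under local_root whose relative path exists
            in none of the source template trees the user is entitled to."""
            allowed = set()
            for src in source_roots:
                for dirpath, dirnames, filenames in os.walk(src):
                    rel = os.path.relpath(dirpath, src)
                    for name in dirnames + filenames:
                        allowed.add(os.path.normpath(os.path.join(rel, name)))

            # Bottom-up so directories are emptied before we try to remove them.
            for dirpath, dirnames, filenames in os.walk(local_root, topdown=False):
                rel = os.path.relpath(dirpath, local_root)
                for name in filenames:
                    if os.path.normpath(os.path.join(rel, name)) not in allowed:
                        os.remove(os.path.join(dirpath, name))
                for name in dirnames:
                    full = os.path.join(dirpath, name)
                    if (os.path.normpath(os.path.join(rel, name)) not in allowed
                            and not os.listdir(full)):
                        os.rmdir(full)

        # e.g. for a user who is a member of group1 and group3:
        prune_local(r"c:\templates\workgroup",
                    [r"\\fileserver\templates\group1",
                     r"\\fileserver\templates\group3"])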

  • Python architecture question

    - by tom smith
    Hi. I'm creating a distributed crawling Python app. It consists of a master server and associated client apps that will run on client servers. The purpose of the client app is to run across a targeted site to extract specific data. The clients need to go "deep" within the site, behind multiple levels of forms, so each client is specifically geared towards a given site. Each client app looks something like:

        main:
            parse initial url
            call function level1(data1)

        function level1(data):
            parse the url for data1
            use the required xpath to get the dom elements
            call level2(data)

        function level2(data2):
            parse the url for data2
            use the required xpath to get the dom elements
            call level3(data)

        function level3(data3):
            parse the url for data3
            use the required xpath to get the dom elements
            call level4(data)

        function level4(data):
            parse the url for data4
            use the required xpath to get the dom elements
            -- at the final function, all the data is output and eventually
            -- returned to the server; at this point the data has elements
            -- from each function

    My question: given that the number of calls made to the child function by the current function varies, I'm trying to figure out the best approach. Each function essentially fetches a page of content and then parses the page using a number of different XPath expressions, combined with different regex expressions depending on the site/page.

    If I run a client on a single box as a sequential process, it'll take a while, but the load on the box is rather small. I've thought of attempting to implement the child functions as threads spawned from the current function, but that could be a nightmare, as well as quickly bring the box to its knees!

    I've thought of breaking the app up in a manner that would allow the master to essentially pass packets to the client boxes, in a way that allows each client/function to be run directly from the master. This approach requires a bit of a rewrite, but it has a number of advantages: a good deal of redundancy, and speed. It would detect if a section of the process was crashing and restart from that point. But I'm not sure if it would be any faster...

    I'm writing the parsing scripts in Python. So... any thoughts/comments would be appreciated. I can get into a great deal more detail, but I didn't want to bore anyone! Thanks!

    Tom
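
    A minimal sketch of the "master hands units of work to clients" shape, using a process pool on one box; the same structure maps onto a network queue (a job table or message broker) for multiple boxes. The fetch/parse bodies are stand-ins for the question's level functions, and the seed URL is hypothetical:

        from multiprocessing import Pool

        def run_level(job):
            """One packaged unit of work: fetch a page, parse it with this
            level's xpath/regex, and return the child jobs plus any data."""
            level, url, data = job
            # ... fetch url and apply this level's xpath/regex here ...
            new_data = data   # placeholder for the extracted fields
            child_urls = []   # placeholder for links discovered at this level
            return [(level + 1, u, new_data) for u in child_urls], new_data

        if __name__ == "__main__":
            pending = [(1, "http://example.com/start", {})]  # hypothetical seed
            results = []
            with Pool(processes=4) as pool:
                while pending:
                    batch, pending = pending, []
                    for children, data in pool.map(run_level, batch):
                        pending.extend(children)
                        results.append(data)
            print(len(results), "pages processed")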
