Search Results

Search found 4940 results on 198 pages for 'understanding'.

Page 182/198

  • The most efficient method of drawing multiple quads in OpenGL

    - by CPatton
    I'm not very keen with OpenGL and I was wondering if someone could give me some insight on this. I'm a 'seasoned' programmer, I've read the redbook about VBOs and the like, but I was wondering from a more experienced person about the best/most efficient way of achieving this. I've been producing this 2d tile-based game engine to be used in several projects. I have a class called "ScreenObject" which is mainly composed of a Dictionary<Point, Tile> The Point key is to show where to render the Tile on the screen, and the Tile contains one or more textures to be drawn at that point. This ScreenObject is where the tiles will be modified, deleted, added, etc.. My original method of drawing the tiles in the testing I've done was to iterate through the ScreenObject and draw each quad at each location separately. From what I've read, this is a massive waste of resources. It wasn't horribly slow in the testing, but after I've completed the animation classes and effect classes, I'm sure it would be extremely slow. And one last thing, if you wouldn't mind.. As I said before, the Tile class can contain multiple textures to be drawn at the Point location on the screen. I recognize possibly two options for me here. Either add a quad at that location for each texture to be drawn, or, somehow.. use a multiple texture for the same quad (if it's possible). Even if each tile contained one texture only, that would be 64 quads to be drawn on the screen. Most of the tiles will contain 2-5 textures, so the number of total quads would increase dramatically with this method. Would it be feasible to add a quad for each new texture, or am I ignoring a better way to do this? Just need some help understanding this if you don't mind :) I've tried to be as concise as possible, and I'd greatly appreciate any responses.. and even some criticism. Programming is often a learning process and one who develops seems to never stops learning. Thanks for your time.
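
    A common way to attack the problem described above is to batch every visible quad into one vertex array (or VBO) and issue a single draw call per texture, typically with all tile images packed into one texture atlas so that "multiple textures per tile" becomes "multiple quads sampling different atlas regions". The sketch below is only an illustration of that batching idea in Python/PyOpenGL, assuming a legacy fixed-function context; the 32-pixel tile size and the tiles dictionary of grid position -> atlas region are made up, not the poster's engine.

        # Sketch: batch all visible tiles into one interleaved array and draw once.
        # Assumes an active GL context and one texture atlas bound beforehand.
        import ctypes
        import numpy as np
        from OpenGL.GL import (
            GL_ARRAY_BUFFER, GL_FLOAT, GL_QUADS, GL_STATIC_DRAW,
            GL_TEXTURE_COORD_ARRAY, GL_VERTEX_ARRAY,
            glBindBuffer, glBufferData, glDrawArrays, glEnableClientState,
            glGenBuffers, glTexCoordPointer, glVertexPointer,
        )

        TILE = 32  # hypothetical tile size in pixels

        def build_batch(tiles):
            """tiles: {(gx, gy): (u0, v0, u1, v1)} -> interleaved x, y, u, v floats."""
            verts = []
            for (gx, gy), (u0, v0, u1, v1) in tiles.items():
                x, y = gx * TILE, gy * TILE
                verts += [x,        y,        u0, v0,
                          x + TILE, y,        u1, v0,
                          x + TILE, y + TILE, u1, v1,
                          x,        y + TILE, u0, v1]
            return np.array(verts, dtype=np.float32)

        def draw_batch(data):
            vbo = glGenBuffers(1)
            glBindBuffer(GL_ARRAY_BUFFER, vbo)
            glBufferData(GL_ARRAY_BUFFER, data.nbytes, data, GL_STATIC_DRAW)
            stride = 4 * 4                                   # four floats per vertex
            glEnableClientState(GL_VERTEX_ARRAY)
            glEnableClientState(GL_TEXTURE_COORD_ARRAY)
            glVertexPointer(2, GL_FLOAT, stride, ctypes.c_void_p(0))
            glTexCoordPointer(2, GL_FLOAT, stride, ctypes.c_void_p(8))
            glDrawArrays(GL_QUADS, 0, len(data) // 4)        # one call for the whole layer

    Tiles with several textures then simply contribute several quads to the same array (drawn back to front), which is usually far cheaper than issuing one draw call per quad even though the vertex count grows.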

  • node.js / socket.io, cookies only working locally

    - by Ben Griffiths
    I'm trying to use cookie based sessions, however it'll only work on the local machine, not over the network. If I remove the session related stuff, it will however work just great over the network... You'll have to forgive the lack of quality code here, I'm just starting out with node/socket etc etc, and finding any clear guides is tough going, so I'm in n00b territory right now. Basically this is so far hacked together from various snippets with about 10% understanding of what I'm actually doing... The error I see in Chrome is: socket.io.js:1632GET http://192.168.0.6:8080/socket.io/1/?t=1334431940273 500 (Internal Server Error) Socket.handshake ------- socket.io.js:1632 Socket.connect ------- socket.io.js:1671 Socket ------- socket.io.js:1530 io.connect ------- socket.io.js:91 (anonymous function) ------- /socket-test/:9 jQuery.extend.ready ------- jquery.js:438 And in the console for the server I see: debug - served static content /socket.io.js debug - authorized warn - handshake error No cookie My server is: var express = require('express') , app = express.createServer() , io = require('socket.io').listen(app) , connect = require('express/node_modules/connect') , parseCookie = connect.utils.parseCookie , RedisStore = require('connect-redis')(express) , sessionStore = new RedisStore(); app.listen(8080, '192.168.0.6'); app.configure(function() { app.use(express.cookieParser()); app.use(express.session( { secret: 'YOURSOOPERSEKRITKEY', store: sessionStore })); }); io.configure(function() { io.set('authorization', function(data, callback) { if(data.headers.cookie) { var cookie = parseCookie(data.headers.cookie); sessionStore.get(cookie['connect.sid'], function(err, session) { if(err || !session) { callback('Error', false); } else { data.session = session; callback(null, true); } }); } else { callback('No cookie', false); } }); }); var users_count = 0; io.sockets.on('connection', function (socket) { console.log('New Connection'); var session = socket.handshake.session; ++users_count; io.sockets.emit('users_count', users_count); socket.on('something', function(data) { io.sockets.emit('doing_something', data['data']); }); socket.on('disconnect', function() { --users_count; io.sockets.emit('users_count', users_count); }); }); My page JS is: jQuery(function($){ var socket = io.connect('http://192.168.0.6', { port: 8080 } ); socket.on('users_count', function(data) { $('#client_count').text(data); }); socket.on('doing_something', function(data) { if(data == '') { window.setTimeout(function() { $('#target').text(data); }, 3000); } else { $('#target').text(data); } }); $('#textbox').keydown(function() { socket.emit('something', { data: 'typing' }); }); $('#textbox').keyup(function() { socket.emit('something', { data: '' }); }); });

  • Why does PostgreSQL query performance drop over time, but get restored when the index is rebuilt

    - by Jim Rush
    According to this page in the manual, indexes don't need to be maintained. However, we are running with a PostgreSQL table that has a continuous rate of updates, deletes and inserts, and over time (a few days) we see significant query degradation. If we delete and recreate the index, query performance is restored. We are using out-of-the-box settings. The table in our test currently starts out empty and grows to half a million rows. It has a fairly large row (lots of text fields). Our search is based on an index, not the primary key (I've confirmed the index is being used, at least under normal conditions). The table is being used as a persistent store for a single process. We are using PostgreSQL on Windows with a Java client. I'm willing to give up insert and update performance to keep up the query performance. We are considering rearchitecting the application so that data is spread across various dynamic tables in a manner that allows us to drop and rebuild indexes periodically without impacting the application. However, as always, there is a time crunch to get this to work and I suspect we are missing something basic in our configuration or usage. We have considered forcing vacuuming and rebuilds to run at certain times, but I suspect the locking period for such an action would cause our query to block. This may be an option, but there are some real-time (windows of 3-5 seconds) implications that require other changes in our code. Additional information: Table and index CREATE TABLE icl_contacts ( id bigint NOT NULL, campaignfqname character varying(255) NOT NULL, currentstate character(16) NOT NULL, xmlscheduledtime character(23) NOT NULL, ... 25 or so other fields. Most of them fixed or varying character fiel ... CONSTRAINT icl_contacts_pkey PRIMARY KEY (id) ) WITH (OIDS=FALSE); ALTER TABLE icl_contacts OWNER TO postgres; CREATE INDEX icl_contacts_idx ON icl_contacts USING btree (xmlscheduledtime, currentstate, campaignfqname); Analyze: Limit (cost=0.00..3792.10 rows=750 width=32) (actual time=48.922..59.601 rows=750 loops=1) - Index Scan using icl_contacts_idx on icl_contacts (cost=0.00..934580.47 rows=184841 width=32) (actual time=48.909..55.961 rows=750 loops=1) Index Cond: ((xmlscheduledtime < '2010-05-20T13:00:00.000'::bpchar) AND (currentstate = 'SCHEDULED'::bpchar) AND ((campaignfqname)::text = '.main.ee45692a-6113-43cb-9257-7b6bf65f0c3e'::text)) And, yes, I am aware that there are a variety of things we could do to normalize and improve the design of this table. Some of these options may be available to us. My focus in this question is on understanding how PostgreSQL is managing the index and query over time (understand why, not just fix). If it were to be done over or significantly refactored, there would be a lot of changes.
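
    For what it's worth, the "force vacuuming and rebuild at certain times" option mentioned above can be scripted without dropping and recreating the index by hand. The following is only a sketch (psycopg2, with the table and index names taken from the question; the DSN and the dead-tuple threshold are placeholders) of checking bloat indicators and running the maintenance commands in a quiet window:

        # Sketch: inspect dead-tuple counts, then VACUUM/REINDEX in a maintenance window.
        # Assumes psycopg2 and the icl_contacts table/index from the question.
        import psycopg2

        conn = psycopg2.connect("dbname=mydb user=postgres")   # placeholder DSN
        conn.autocommit = True          # VACUUM cannot run inside a transaction block

        with conn.cursor() as cur:
            cur.execute(
                "SELECT n_dead_tup, last_autovacuum "
                "FROM pg_stat_user_tables WHERE relname = 'icl_contacts'")
            dead, last_vacuum = cur.fetchone()
            print("dead tuples:", dead, "last autovacuum:", last_vacuum)

            if dead > 50000:                                    # arbitrary threshold
                cur.execute("VACUUM ANALYZE icl_contacts")
                cur.execute("REINDEX INDEX icl_contacts_idx")   # takes a lock; see the blocking concern above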

  • Convert a PDF to a Transparent PNG with GhostScript

    - by Jonathon Wolfe
    Hi all. I am attempting, unsuccessfully, to use Ghostscript to rasterize PDF files with a transparent background to PNG files with a transparent background. I've searched high and low for questions from others attempting the same thing and none of the posted solutions, which as far as I can tell come down to specifying -sDEVICE=pngalpha, have worked with my test files. At this point I would really appreciate any advice or tips a more experienced hand could provide. My test PDF is located here: http://www.kolossus.com/files/test.pdf It could be that the issue is with this file, but I doubt it. As far as I can tell, it has no specified background, and when I open the file with a transparency-aware app like Photoshop or Illustrator, sure enough it displays with a transparent background. However, when opened with an application like Adobe Reader the file is rendered with a white background. I believe that this has more to do with the application rendering the PDF than with the PDF itself -- apps like Adobe Reader assume you want to see what a printed document will look like and therefore always show a white canvas behind the artwork -- but I can't be sure. The gs command I'm using is: gs -dNOPAUSE -dBATCH -sDEVICE=pngalpha -r72 -sOutputFile=test.png test.pdf This produces a PNG that has transparent pixels outside of the bounding box of the artwork in the file, but all pixels that are inside the artwork's bounding box are rasterized against a white background. This is a problem for me, as my artwork has drop shadows and antialiased edges that need to be preserved in the final output, and can't just be postprocessed out with ImageMagick. A sample of my PNG output is at the same location as the pdf above, with .png at the end (stackoverflow won't let me include more than one url in my post). Interestingly, I see no effects from using the -dBackgroundColor flag, even if I set it to something non-white like -dBackgroundColor=16#ff0000. Perhaps my understanding of the syntax of this flag is wrong. Also interestingly, I see no effects from using the -dTextAlphaBits=4 -dGraphicsAlphaBits=4 flags to try to enable subpixel antialiasing. I would also appreciate any advice on how to enable subpixel antialiasing, especially on text. Finally, I'm using GPL Ghostscript 8.64 on Mac OS 10.5.7, and the rendering workflow I'm trying to get set up is to generate transparent PNG images from PDFs output by PrinceXML. I'm calling Ghostscript directly for the rasterization instead of using ImageMagick because ImageMagick delegates to Ghostscript for PDF rasterization and I should be able to control the rasterization better by calling GS directly. Thanks for your help. -Jon Wolfe
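
    For reference, the invocation being discussed (with the -dTextAlphaBits/-dGraphicsAlphaBits flags added) is easy to wrap up so the experiments are repeatable; this is just a thin sketch around the command already shown above, not a fix for the white-background behaviour, and the paths are placeholders:

        # Thin wrapper around the gs command from the question, antialiasing flags included.
        import subprocess

        def pdf_to_png(pdf_path, png_path, dpi=72):
            cmd = [
                "gs", "-dNOPAUSE", "-dBATCH",
                "-sDEVICE=pngalpha",
                "-dTextAlphaBits=4", "-dGraphicsAlphaBits=4",
                f"-r{dpi}",
                f"-sOutputFile={png_path}",
                pdf_path,
            ]
            subprocess.run(cmd, check=True)

        pdf_to_png("test.pdf", "test.png")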

  • Determining what frequencies correspond to the x-axis in the aurioTouch sample application

    - by eagle
    I'm looking at the aurioTouch sample application for the iPhone SDK. It has a basic spectrum analyzer implemented when you choose the "FFT" option. One of the things the app is lacking is X axis labels (i.e. the frequency labels). In the aurioTouchAppDelegate.mm file, in the function - (void)drawOscilloscope at line 652, it has the following code: if (displayMode == aurioTouchDisplayModeOscilloscopeFFT) { if (fftBufferManager->HasNewAudioData()) { if (fftBufferManager->ComputeFFT(l_fftData)) [self setFFTData:l_fftData length:fftBufferManager->GetNumberFrames() / 2]; else hasNewFFTData = NO; } if (hasNewFFTData) { int y, maxY; maxY = drawBufferLen; for (y=0; y<maxY; y++) { CGFloat yFract = (CGFloat)y / (CGFloat)(maxY - 1); CGFloat fftIdx = yFract * ((CGFloat)fftLength); double fftIdx_i, fftIdx_f; fftIdx_f = modf(fftIdx, &fftIdx_i); SInt8 fft_l, fft_r; CGFloat fft_l_fl, fft_r_fl; CGFloat interpVal; fft_l = (fftData[(int)fftIdx_i] & 0xFF000000) >> 24; fft_r = (fftData[(int)fftIdx_i + 1] & 0xFF000000) >> 24; fft_l_fl = (CGFloat)(fft_l + 80) / 64.; fft_r_fl = (CGFloat)(fft_r + 80) / 64.; interpVal = fft_l_fl * (1. - fftIdx_f) + fft_r_fl * fftIdx_f; interpVal = CLAMP(0., interpVal, 1.); drawBuffers[0][y] = (interpVal * 120); } cycleOscilloscopeLines(); } } From my understanding, this part of the code is what is used to decide which magnitude to draw for each frequency in the UI. My question is how can I determine what frequency each iteration (or y value) represents inside the for loop. For example, if I want to know what the magnitude is for 6kHz, I'm thinking of adding a line similar to the following: if (yValueRepresentskHz(y, 6)) NSLog(@"The magnitude for 6kHz is %f", (interpVal * 120)); Please note that although they chose to use the variable name y, from what I understand, it actually represents the x-axis in the visual graph of the spectrum analyzer, and the value of the drawBuffers[0][y] represents the y-axis.
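
    Assuming the audio session runs at the usual 44.1 kHz sample rate and that the fftLength bins span 0 Hz up to the Nyquist frequency (half the sample rate), the frequency a given screen column represents follows directly from the same yFract computation used in the loop. A small Python sketch of that mapping (the sample rate and buffer width here are assumptions, not values from the sample code):

        # Map a screen column y (0 .. maxY-1) to the frequency it represents,
        # mirroring the yFract / fftIdx computation in drawOscilloscope.
        SAMPLE_RATE = 44100.0          # assumed hardware sample rate
        NYQUIST = SAMPLE_RATE / 2.0

        def freq_for_column(y, max_y):
            y_fract = y / float(max_y - 1)     # same as yFract in the loop
            return y_fract * NYQUIST           # the columns span 0 Hz .. Nyquist

        def column_for_freq(freq_hz, max_y):
            return int(round(freq_hz / NYQUIST * (max_y - 1)))

        # e.g. with a 512-column draw buffer, 6 kHz lands near column:
        print(column_for_freq(6000, 512))      # -> 139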

  • Sql Server Compact 2005 on Visual Studio 2008

    - by Tim
    I'm working on a Windows Forms application that interacts with a Sql Compact database file created by SQL Server 2005. This application was originally developed in Visual Studio 2005 but was recently converted to a Visual Studio 2008 solution. In regards to Sql Compact, we made sure the references were all still set to the assemblies that handle the 2005 version of Sql Compact rather than Sql Compact 3.5. Having done this, the application still runs just as it should - it will still interact with the Compact database, perform synchronization operations, etc. However, I just discovered today that Visual Studio tools such as the DataSet Designer do not play well with a Sql Compact database file of an older version than 3.5. If I go to the New Connection... wizard, the only Sql Compact Data Source / Data Provider are for Sql Compact 3.5. I assume that Visual Studio 2008 just doesn't include the data provider for the older version of Sql Compact by default. Is there a way you can add the old version of Sql Compact to the list of "Data Sources" for the connection wizard? To see exactly what I'm referring to, click on the Tools menu of Visual Studio 2008 and click Connect to Database... In the window that comes up, click Change... next to the Data source setting. From this dialog there is no way I can select the earlier version of Sql Compact - only 3.5 is available. Maybe I need to add an assembly reference somewhere? Or copy some file(s) from my Visual Studio 2005 directory over to 2008? I would think there would have to be a way for Visual Studio 2008 to be able to interact with a Sql Compact database from Sql Server 2005. To provide one more bit of detail, I discovered this problem when I went to my DataSet, right-clicked and tried to add a TableAdapter. The first screen that comes up says, "Choose Your Data Connection". If I leave it set to the Sql Compact connection that we've always used, I now get the following error when clicking the Next button: Failed to open a connection to the database "The selected database was created with an earlier version of SQL Server Compact and needs to be upgraded to SQL Server Compact 3.5 before the connection can be opened or tested. Upgrade the database by creating a new data connection and completing the Add Connection dialog box." Check the connection and try again. The only problem here is that we still use Sql Server 2005, and if my understanding is correct, it does not produce subscription files that are compatible with Sql Compact 3.5. If I am wrong in this assumption, please correct me. Any help you can provide is greatly appreciated. Thank you.

  • What was your most impressive technical programming achievement performed to impress a romantic interest?

    - by DVK
    OK, so the archetypal human story is for a guy to go out and impress the girl with some wonderful achievement like slaying a dragon, building a monument or conquering a neighboring tribe. This being the enlightened 21st century on SO, let's morph this into: a StackOverflower performing a feat of programming to impress a romantic interest. There are two ways to do this: Technical achievement: Impressing a person with a suitable background/understanding of programming with the actual coding powers you displayed. A dumb movie example would be that kid in the movie "Hackers" showing off his hacking skills in front of Angelina Jolie. Artistic achievement: Impressing a person with the result of running said code, whether or not they understand just how incredible the code itself is. An example is the animated ANSI rose (for a guy who actually wrote the ANSI code). This question is only about the first kind (technical achievements) - e.g. the person of interest was presented with impressive code/design that (s)he was able to properly appreciate. Rules (what doesn't qualify): The target audience must have been a person of romantic interest (prospective or present significant other or random hook-up). E.g. showing your program to your sister who's also a software developer doesn't count. The achievement must have been done specifically with the goal of impressing such a person. However, it is OK if the achievement was done to impress a generic qualifying person, not someone specific. Although... if you write code to impress girls in general, I'd say "get a better idea of the opposite sex". The achievement must have been done with the goal of impressing the person. In other words, if you would have done it anyway without the romantic interest's knowledge, it doesn't count. As examples, the following do not count: programming for your job. Programming for a coding contest. An Open Source program that you'd have done anyway. The precise nature of the awesomeness of the achievement is somewhat irrelevant - from learning all of J2EE in 2 days, to writing a fancy game engine, to implementing a Python compiler in LOGO. As long as it's programming/software development related. The achievement should preferably be something other people would rank highly as well. If your date was impressed with your skill at calculating the Fibonacci sequence without recursive function calls, it doesn't mean most developers will be. But it does mean you need to start finding better things to do on dates ;)

  • SQL Server stored procedures - update column based on variable name..?

    - by ClarkeyBoy
    Hi, I have a data driven site with many stored procedures. What I want to eventually be able to do is to say something like: For Each @variable in sproc inputs UPDATE @TableName SET @variable.toString = @variable Next I would like it to be able to accept any number of arguments. It will basically loop through all of the inputs and update the column with the name of the variable with the value of the variable - for example column "Name" would be updated with the value of @Name. I would like to basically have one stored procedure for updating and one for creating. However to do this I will need to be able to convert the actual name of a variable, not the value, to a string. Question 1: Is it possible to do this in T-SQL, and if so how? Question 2: Are there any major drawbacks to using something like this (like performance or CPU usage)? I know if a value is not valid then it will only prevent the update involving that variable and any subsequent ones, but all the data is validated in the vb.net code anyway so will always be valid on submitting to the database, and I will ensure that only variables where the column exists are able to be submitted. Many thanks in advance, Regards, Richard Clarke Edit: I know about using SQL strings and the risk of SQL injection attacks - I studied this a bit in my dissertation a few weeks ago. Basically the website uses an object oriented architecture. There are many classes - for example Product - which have many "Attributes" (I created my own class called Attribute, which has properties such as DataField, Name and Value where DataField is used to get or update data, Name is displayed on the administration frontend when creating or updating a Product and the Value, which may be displayed on the customer frontend, is set by the administrator. DataField is the field I will be using in the "UPDATE Blah SET @Field = @Value". I know this is probably confusing but its really complicated to explain - I have a really good understanding of the entire system in my head but I cant put it into words easily. Basically the structure is set up such that no user will be able to change the value of DataField or Name, but they can change Value. I think if I were to use dynamic parameterised SQL strings there will therefore be no risk of SQL injection attacks. I mean basically loop through all the attributes so that it ends up like: UPDATE Products SET [Name] = '@Name', Description = '@Description', Display = @Display Then loop through all the attributes again and add the parameter values - this will have the same effect as using stored procedures, right?? I dont mind adding to the page load time since this is mainly going to affect the administration frontend, and will marginly affect the customer frontend.
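
    The "dynamic parameterised SQL" idea from the edit above can be made concrete in a few lines: keep the column names restricted to a fixed whitelist (the attribute DataFields) and bind only the values as parameters, so user input never becomes part of the SQL text. A small Python/sqlite3 sketch of that shape (the table and columns are made up for illustration; the same pattern applies whether the statement is built in VB.NET or in T-SQL via sp_executesql):

        # Sketch: dynamic UPDATE with whitelisted column names and parameterised values.
        import sqlite3

        ALLOWED_COLUMNS = {"Name", "Description", "Display"}   # the attribute DataFields

        def update_product(conn, product_id, attributes):
            cols = [c for c in attributes if c in ALLOWED_COLUMNS]
            if not cols:
                return
            set_clause = ", ".join(f"{c} = ?" for c in cols)        # names come only from the whitelist
            params = [attributes[c] for c in cols] + [product_id]   # values are always parameters
            conn.execute(f"UPDATE Products SET {set_clause} WHERE ProductID = ?", params)

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE Products (ProductID INTEGER PRIMARY KEY, Name TEXT, Description TEXT, Display INTEGER)")
        conn.execute("INSERT INTO Products VALUES (1, 'Old', '', 0)")
        update_product(conn, 1, {"Name": "New name", "Display": 1, "evil; DROP TABLE": "x"})
        print(conn.execute("SELECT * FROM Products").fetchone())    # (1, 'New name', '', 1)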

  • jQuery hide ul header when all entries are deleted...

    - by Scott
    I'm a noob with jQuery...and I hope I've explained this well enough; I have a <ul> header that appears when I've added an entry to a dynamically created list using $.post. Each entry added has a delete/edit button associated with it. Header is this: <ul class="header"> <li>Month</li> <li>Year</li> <li>Cottage</li> </ul> My dynamic list that is created: <ul class="addedItems"> <li>Month</li> <li>Year</li> <li>Cottage</li> <li><span class="edit">edit</span></li> <li><span class="del">delete</span></li> </ul> This all looks like this: Month Year Cottage <--this appears after I've added an entry -------------------------------- and I want it to stick around unless all items are deleted. Dec 1990 Fir edit/delete <--entries Jan 2000 Willow edit/delete My question is: Is there some kind of conditional that I can use with jQuery to hide the class="header" if all the items are deleted? I've read up on conditional statements like is and not with jq but I'm not really understanding how they work. All of the items in class="addedItems" is stored in data produced by JSON. This is the delete function: $(".del").live("click", function(){ var del = this; var thisVal = $(del).val(); $.post("delete.php", { dirID : thisVal }, function(data){ if(confirm("Are you sure you want to DELETE this entry?") == true) { if(data.success) { //hide the class="header" here somwhere?? $(del).parents(".addedItems").hide(); } else if(data.error) { // throw error if item does not delete } } }, "json"); return false; }); //end of .del function Here is the delete.php <?php if($_POST) { $data['delID'] = $_POST['dirID']; $query = "DELETE from //tablename WHERE dirID = '{$data['delID']}' LIMIT 1"; $result = $db->query($query); if($result) { $data['success'] = true; $data['message'] = "Entry was successfully removed."; } else { $data['error'] = true; $data['message'] = "Item could not be deleted."; } echo json_encode($data); } ?>

  • wrapping boost::ublas with swig

    - by leon
    I am trying to pass data between the numpy and boost::ublas layers. I have written an ultra-thin wrapper because SWIG cannot parse ublas' header correctly. The code is shown below: #include <boost/numeric/ublas/vector.hpp> #include <boost/numeric/ublas/matrix.hpp> #include <boost/lexical_cast.hpp> #include <algorithm> #include <sstream> #include <string> using std::copy; using namespace boost; typedef boost::numeric::ublas::matrix<double> dm; typedef boost::numeric::ublas::vector<double> dv; class dvector : public dv{ public: dvector(const int rhs):dv(rhs){;}; dvector(); dvector(const int size, double* ptr):dv(size){ copy(ptr, ptr+sizeof(double)*size, &(dv::data()[0])); } ~dvector(){} }; with a SWIG interface that looks something like %apply(int DIM1, double* INPLACE_ARRAY1) {(const int size, double* ptr)} class dvector{ public: dvector(const int rhs); dvector(); dvector(const int size, double* ptr); %newobject toString; char* toString(); ~dvector(); }; I have compiled them successfully with gcc 4.3 and vc++ 9.0. However, when I simply run a = dvector(array([1.,2.,3.])) it gives me a segfault. This is the first time I have used SWIG with numpy, and I don't yet fully understand the data conversion and memory-buffer passing. Does anyone see something obvious I have missed? I have tried to trace through with a debugger but it crashed within the assembly of python.exe. I have no clue whether this is a SWIG problem or a problem with my simple wrapper. Anything is appreciated.

  • iPhone SDK / Core Data usage scenario, similar to GAE data store?

    - by boliva
    Hi all, I am currently rewriting a map based App which I wrote in the past, specifically for 2.2.1 devices. Originally I wrote it to make use of SQLite databases but I would like to try and migrate it over Core Data, now that it's available on 3.X (for which I am rewriting to). I am fairly experienced in iPhone/Obj-C development, SQL and server backend technologies, but I have never had the chance to work with Core Data so IDK really if it's the appropiate tool for what I am trying to accomplish. The App works on a limited area in a map over which there are about 4000 placemarks, with different kinds of icons and sizes. Of course not all 4000 placemarks are shown at once but only those currently visible in the map viewport, and depending on the zoom level. What I am doing right now is, after the user moves the map in any way (panning or zooming) I am requesting from the backend server the required information for the placemarks that would be visible given the viewport coordinates boundaries and zoom level, however the process isn't as smooth as I'd like (the backend is sending its response in XML and I am compressing it using gzip), it takes anywhere from 1 to 3 seconds to update the display of the placemarks after the user ends moving the map. What I would like to do is to prefetch all the placemarks data at the App launch and use it all through the app life time - I don't mind storing it for later use because the data should be dynamic. The way I would do it right now is, after retrieving all the data, to store it on an SQLite db which I would query later, whenever the user moves the map, to return only the placemarks inside the viewport coordinate boundaries and specific to a given zoom level. Now, the question itself is, if is it possible to use some more 'native', object driven way to carry this queries process, which got me thinking about Core Data and if it is in any way similar to what Google App Engine offers through its datastore where you can fetch a number of objects from the backend given a certain query or criteria, without resorting to an SQL query itself. Like I said before I don't have any experience on Core Data but I have a pretty deep understanding of Obj-C and iPhone development, as well as SQL databases. Any guides on how to achieve what I'm trying (if possible at all) would be greatly appreciated.
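
    Whichever storage layer ends up being used (Core Data's NSPredicate fetches or a plain SQLite table), the viewport query itself reduces to a bounding-box filter plus a zoom-level condition, which an index can serve quickly. A minimal SQLite sketch of that query shape, with an invented schema purely for illustration:

        # Minimal illustration of the viewport query: fetch only placemarks inside
        # the visible bounding box that are shown at the current zoom level.
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("""CREATE TABLE placemarks
                        (id INTEGER PRIMARY KEY, lat REAL, lon REAL,
                         min_zoom INTEGER, icon TEXT)""")
        conn.executemany("INSERT INTO placemarks VALUES (?,?,?,?,?)", [
            (1, 40.416, -3.703, 10, "museum"),
            (2, 40.420, -3.690, 14, "cafe"),     # only shown when zoomed in further
            (3, 41.000, -3.700, 10, "park"),     # outside the viewport queried below
        ])
        conn.execute("CREATE INDEX idx_placemarks_bbox ON placemarks (lat, lon)")

        def visible(conn, south, north, west, east, zoom):
            return conn.execute(
                """SELECT id, lat, lon, icon FROM placemarks
                   WHERE lat BETWEEN ? AND ? AND lon BETWEEN ? AND ?
                     AND min_zoom <= ?""",
                (south, north, west, east, zoom)).fetchall()

        print(visible(conn, 40.40, 40.43, -3.72, -3.68, 12))   # -> placemark 1 only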

  • Regular expression works normally, but fails when placed in an XML schema

    - by Eli Courtwright
    I have a simple doc.xml file which contains a single root element with a Timestamp attribute: <?xml version="1.0" encoding="utf-8"?> <root Timestamp="04-21-2010 16:00:19.000" /> I'd like to validate this document against a my simple schema.xsd to make sure that the Timestamp is in the correct format: <?xml version="1.0" encoding="utf-8"?> <xs:schema attributeFormDefault="unqualified" elementFormDefault="qualified" xmlns:xs="http://www.w3.org/2001/XMLSchema"> <xs:element name="root"> <xs:complexType> <xs:attribute name="Timestamp" use="required" type="timeStampType"/> </xs:complexType> </xs:element> <xs:simpleType name="timeStampType"> <xs:restriction base="xs:string"> <xs:pattern value="(0[0-9]{1})|(1[0-2]{1})-(3[0-1]{1}|[0-2]{1}[0-9]{1})-[2-9]{1}[0-9]{3} ([0-1]{1}[0-9]{1}|2[0-3]{1}):[0-5]{1}[0-9]{1}:[0-5]{1}[0-9]{1}.[0-9]{3}" /> </xs:restriction> </xs:simpleType> </xs:schema> So I use the lxml Python module and try to perform a simple schema validation and report any errors: from lxml import etree schema = etree.XMLSchema( etree.parse("schema.xsd") ) doc = etree.parse("doc.xml") if not schema.validate(doc): for e in schema.error_log: print e.message My XML document fails validation with the following error messages: Element 'root', attribute 'Timestamp': [facet 'pattern'] The value '04-21-2010 16:00:19.000' is not accepted by the pattern '(0[0-9]{1})|(1[0-2]{1})-(3[0-1]{1}|[0-2]{1}[0-9]{1})-[2-9]{1}[0-9]{3} ([0-1]{1}[0-9]{1}|2[0-3]{1}):[0-5]{1}[0-9]{1}:[0-5]{1}[0-9]{1}.[0-9]{3}'. Element 'root', attribute 'Timestamp': '04-21-2010 16:00:19.000' is not a valid value of the atomic type 'timeStampType'. So it looks like my regular expression must be faulty. But when I try to validate the regular expression at the command line, it passes: >>> import re >>> pat = '(0[0-9]{1})|(1[0-2]{1})-(3[0-1]{1}|[0-2]{1}[0-9]{1})-[2-9]{1}[0-9]{3} ([0-1]{1}[0-9]{1}|2[0-3]{1}):[0-5]{1}[0-9]{1}:[0-5]{1}[0-9]{1}.[0-9]{3}' >>> assert re.match(pat, '04-21-2010 16:00:19.000') >>> I'm aware that XSD regular expressions don't have every feature, but the documentation I've found indicates that every feature that I'm using should work. So what am I mis-understanding, and why does my document fail?
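
    One detail worth spelling out for the comparison above: an XSD pattern is implicitly anchored - it must match the entire attribute value - whereas Python's re.match only has to match a prefix. Because | has the lowest precedence, the first alternative of the pattern, (0[0-9]{1}), can match just "04" of the test string; that prefix satisfies re.match but not the schema. The sketch below shows the difference and a version with the month alternation grouped (lightly simplified: the redundant {1} quantifiers dropped and the final dot escaped):

        import re

        value = "04-21-2010 16:00:19.000"

        # Pattern as written in the schema: the | splits it into "(0[0-9])" OR "(1[0-2])-rest..."
        original = (r"(0[0-9]{1})|(1[0-2]{1})-(3[0-1]{1}|[0-2]{1}[0-9]{1})-[2-9]{1}[0-9]{3} "
                    r"([0-1]{1}[0-9]{1}|2[0-3]{1}):[0-5]{1}[0-9]{1}:[0-5]{1}[0-9]{1}.[0-9]{3}")

        print(re.match(original, value).group(0))   # '04'  -- a prefix match is enough here
        print(re.fullmatch(original, value))        # None  -- whole-value match, as XSD requires

        # Month alternation grouped so the rest of the pattern applies to both branches.
        fixed = (r"((0[0-9])|(1[0-2]))-(3[0-1]|[0-2][0-9])-[2-9][0-9]{3} "
                 r"([0-1][0-9]|2[0-3]):[0-5][0-9]:[0-5][0-9]\.[0-9]{3}")

        print(bool(re.fullmatch(fixed, value)))     # True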

  • C# Chain-of-responsibility with delegates

    - by nettguy
    For my understanding purpose i have implemented Chain-Of-Responsibility pattern. //Abstract Base Type public abstract class CustomerServiceDesk { protected CustomerServiceDesk _nextHandler; public abstract void ServeCustomers(Customer _customer); public void SetupHadler(CustomerServiceDesk _nextHandler) { this._nextHandler = _nextHandler; } } public class FrontLineServiceDesk:CustomerServiceDesk { public override void ServeCustomers(Customer _customer) { if (_customer.ComplaintType == ComplaintType.General) { Console.WriteLine(_customer.Name + " Complaints are registered ; will be served soon by FrontLine Help Desk.."); } else { Console.WriteLine(_customer.Name + " is redirected to Critical Help Desk"); _nextHandler.ServeCustomers(_customer); } } } public class CriticalIssueServiceDesk:CustomerServiceDesk { public override void ServeCustomers(Customer _customer) { if (_customer.ComplaintType == ComplaintType.Critical) { Console.WriteLine(_customer.Name + "Complaints are registered ; will be served soon by Critical Help Desk"); } else if (_customer.ComplaintType == ComplaintType.Legal) { Console.WriteLine(_customer.Name + "is redirected to Legal Help Desk"); _nextHandler.ServeCustomers(_customer); } } } public class LegalissueServiceDesk :CustomerServiceDesk { public override void ServeCustomers(Customer _customer) { if (_customer.ComplaintType == ComplaintType.Legal) { Console.WriteLine(_customer.Name + "Complaints are registered ; will be served soon by legal help desk"); } } } public class Customer { public string Name { get; set; } public ComplaintType ComplaintType { get; set; } } public enum ComplaintType { General, Critical, Legal } void Main() { CustomerServiceDesk _frontLineDesk = new FrontLineServiceDesk(); CustomerServiceDesk _criticalSupportDesk = new CriticalIssueServiceDesk(); CustomerServiceDesk _legalSupportDesk = new LegalissueServiceDesk(); _frontLineDesk.SetupHadler(_criticalSupportDesk); _criticalSupportDesk.SetupHadler(_legalSupportDesk); Customer _customer1 = new Customer(); _customer1.Name = "Microsoft"; _customer1.ComplaintType = ComplaintType.General; Customer _customer2 = new Customer(); _customer2.Name = "SunSystems"; _customer2.ComplaintType = ComplaintType.Critical; Customer _customer3 = new Customer(); _customer3.Name = "HP"; _customer3.ComplaintType = ComplaintType.Legal; _frontLineDesk.ServeCustomers(_customer1); _frontLineDesk.ServeCustomers(_customer2); _frontLineDesk.ServeCustomers(_customer3); } Question Without breaking the chain-of-responsibility ,how can i apply delegates and events to rewrite the code?

  • ArrayAdapter needs to be cleared even though I am creating a new one

    - by Roi
    Hello I'm having problems understanding how the ArrayAdapter works. My code is working but I dont know how.(http://amy-mac.com/images/2013/code_meme.jpg) I have my activity, inside it i have 2 private classes.: public class MainActivity extends Activity { ... private void SomePrivateMethod(){ autoCompleteTextView.setAdapter(new ArrayAdapter<String>(this, android.R.layout.simple_spinner_dropdown_item, new ArrayList<String>(Arrays.asList("")))); autoCompleteTextView.addTextChangedListener(new MyTextWatcher()); } ... private class MyTextWatcher implements TextWatcher { ... } private class SearchAddressTask extends AsyncTask<String, Void, String[]> { ... } } Now inside my textwatcher class i call the search address task: @Override public void afterTextChanged(Editable s) { new SearchAddressTask().execute(s.toString()); } So far so good. In my SearchAddressTask I do some stuff on doInBackground() that returns the right array. On the onPostExecute() method i try to just modify the AutoCompleteTextView adapter to add the values from the array obtained in doInBackground() but the adapter cannot be modified: NOT WORKING CODE: protected void onPostExecute(String[] addressArray) { ArrayAdapter<String> adapter = (ArrayAdapter<String>) autoCompleteDestination.getAdapter(); adapter.clear(); adapter.addAll(new ArrayList<String>(Arrays.asList(addressArray))); adapter.notifyDataSetChanged(); Log.d("SearchAddressTask", "adapter isEmpty : " + adapter.isEmpty()); // Returns true!!??! } I dont get why this is not working. Even if i run it on UI Thread... I kept investigating, if i recreate the arrayAdapter, is working in the UI (Showing the suggestions), but i still need to clear the old adapter: WORKING CODE: protected void onPostExecute(String[] addressArray) { ArrayAdapter<String> adapter = (ArrayAdapter<String>) autoCompleteDestination.getAdapter(); adapter.clear(); autoCompleteDestination.setAdapter(new ArrayAdapter<String>(NewDestinationActivity.this,android.R.layout.simple_spinner_dropdown_item, new ArrayList<String>(Arrays.asList(addressArray)))); //adapter.notifyDataSetChanged(); // no needed Log.d("SearchAddressTask", "adapter isEmpty : " + adapter.isEmpty()); // keeps returning true!!??! } So my question is, what is really happening with this ArrayAdapter? why I cannot modify it in my onPostExecute()? Why is working in the UI if i am recreating the adapter? and why i need to clear the old adapter then? I dont know there are so many questions that I need some help in here!! Thanks!!

  • Streaming webcam video in Flash using MP4 encoding

    - by Herms
    One of the features of the Flash app I'm working on is to be able to stream a webcam to others. We're just using the built-in webcam support in Flash and sending it through FMS. We've had some people ask for higher quality video, but we're already using the highest quality setting we can in Flash (setting quality to 100%). My understanding is that in the newer flash players they added support for MPEG-4 encoding for the videos. I created a simple test Flex app to try and compare the video quality of the MP4 vs FLV encodings. However, I can't seem to get MP4 to work at all. According to the Flex documentation the only thing I need to do to use MP4 instead of FLV is prepend "mp4:" to the name of the stream when calling publish: Specify the stream name as a string with the prefix mp4: with or without the filename extension. The prefix indicates to the server that the file contains H.264-encoded video and AAC-encoded audio within the MPEG-4 Part 14 container format. When I try this nothing happens. I don't get any events raised on the client side, no exceptions thrown, and my logging on the server side doesn't show any streams starting. Here's the relevant code: // These are all defined and created within the class. private var nc:NetConnection; private var sharing:Boolean; private var pubStream:NetStream; private var format:String; private var streamName:String; private var camera:Camera; // called when the user clicks the start button private function startSharing():void { if (!nc.connected) { return; } if (sharing) { return; } if(pubStream == null) { pubStream = new NetStream(nc); pubStream.attachCamera(camera); } startPublish(); sharing = true; } private function startPublish():void { var name:String; if (this.format == "mp4") { name = "mp4:" + streamName; } else { name = streamName; } //pubStream.publish(name, "live"); pubStream.publish(name, "record"); }

  • Network communication for a turn-based board game

    - by randooom
    Hi all, my first question here, so please don't be to harsh if something went wrong :) I'm currently a CS student (from Germany, if this info is of any use ;) ) and we got a, free selectable, programming assignment, which we have to write in a C++/CLI Windows Forms Application. My team, two others and me, decided to go for a network-compatible port of the board game Risk. We divided the work in 3 Parts, namely UI, game logic and network. Now we're on the part where we have to get everything working together and the big question mark is, how to get the clients synchronized with each other? Our approach so far is, that each client has all information necessary to calculate and/or execute all possible actions. Actually the clients have all information available at all, aside from the game-initializing phase (add players, select map, etc.), which needs one "super-client" with some extra stuff to control things. This is the standard scenario of our approach: player performs action, the action is valid and got executed on the players client action is sent over the network action is executed on the other clients The design (i.e. no or code so far) we came up with so far, is something like the following pseudo sequence diagram. Gui, Controller and Network implement all possible actions (i.e. all actions which change data) as methods from an interface. So each part can implement the method in a way to get their job done. Example with Action(): On the player side's Client: Player-->Gui.Action() Gui-->Controller.Action() Controller-->Logic.Action (Logic.Action() == NoError)? Controller-->Network.Action() Network-->Parser.ParseAction() Network.Send(msg) On all other clients: Network.Recv(msg) Network-->Parser.Deparse(msg) Parser-->Logic.Action() Logic-->Gui.Action() The questions: Is this a viable approach to our task? Any better/easier way to this? Recommendations, critique? Our knowledge (so you can better target your answer): We are on the beginner side, in regards to programming on a somewhat larger projects with a small team. All of us have some general programming experience and basic understanding of the .Net Libraries and Windows Forms. If you need any further information, please feel free to ask.
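
    As a language-neutral mock-up of the Action() flow sketched above (validate with the local logic, serialize, send, parse and re-apply on every other client), the snippet below uses Python and a fake in-process "network"; the class and method roles follow the pseudo sequence diagram rather than any real networking library, and the Risk-style territory names are just example data:

        # Mock-up of the described flow: validate locally, broadcast the serialized
        # action, and apply it on every receiving client so all stay in sync.
        import json

        class Logic:
            def __init__(self):
                self.armies = {"Alaska": 3}
            def validate(self, action):                 # Logic.Action() -> NoError / error
                return action["troops"] <= self.armies.get(action["from"], 0)
            def apply(self, action):
                self.armies[action["from"]] -= action["troops"]
                self.armies[action["to"]] = self.armies.get(action["to"], 0) + action["troops"]

        class Client:
            def __init__(self, name, network):
                self.name, self.logic, self.network = name, Logic(), network
                network.register(self)
            def perform(self, action):                  # Gui -> Controller.Action()
                if self.logic.validate(action):
                    self.logic.apply(action)
                    self.network.broadcast(self, json.dumps(action))   # Network.Action()
            def receive(self, msg):                     # Network.Recv -> Parser -> Logic -> Gui
                self.logic.apply(json.loads(msg))

        class FakeNetwork:
            def __init__(self): self.clients = []
            def register(self, client): self.clients.append(client)
            def broadcast(self, sender, msg):
                for client in self.clients:
                    if client is not sender:
                        client.receive(msg)

        net = FakeNetwork()
        a, b = Client("A", net), Client("B", net)
        a.perform({"from": "Alaska", "to": "Kamchatka", "troops": 2})
        print(b.logic.armies)   # {'Alaska': 1, 'Kamchatka': 2} -- both clients agree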

  • not sure if this is my mistake ~~ vs2010 Add Service Reference fails

    - by gerryLowry
    https://tbe.taleo.net/MANAGER/dispatcher/servlet/rpcrouter the above is from Taleo's API guide. I'm trying to create a WCF Client (e.g.: " Creating Your First WCF Client" http://channel9.msdn.com/shows/Endpoint/Endpoint-Screencasts-Creating-Your-First-WCF-Client/ ) The tbe.taleo... link is from Taleo's API documentation. Likely my understanding is flawed. My assumption is that when the link from Taleo is entered into the vs2010 "Add Service Reference" dialog and GO is clicked, then vs2010 should retrieve a proper WSDL/SOAP envelope back from the Taleo link. That does not happen; instead an error occurs. Fiddler2 (http://fiddler2.com) displays the status code 500 "HTTP/1.1 500 Internal Server Error". [FULL DETAILS BELOW] "WcfTestClient.exe" gives a similar error: [WcfTestClient DETAILS BELOW] QUESTION: is it me, or is the Taleo link flawed? Thank you, Gerry [FULL DETAILS "Add Service Reference"] The HTML document does not contain Web service discovery information. Metadata contains a reference that cannot be resolved: 'https://tbe.taleo.net/MANAGER/dispatcher/servlet/rpcrouter'. The content type text/xml;charset=utf-8 of the response message does not match the content type of the binding (application/soap+xml; charset=utf-8). If using a custom encoder, be sure that the IsContentTypeSupported method is implemented properly. The first 544 bytes of the response were: ' SOAP-ENV:Protocol Unsupported content type "application/soap+xml; charset=utf-8", must be: "text/xml". /MANAGER/dispatcher/servlet/rpcrouter '. The remote server returned an error: (500) Internal Server Error. If the service is defined in the current solution, try building the solution and adding the service reference again. [WcfTestClient DETAILS] Error: Cannot obtain Metadata from https://tbe.taleo.net/MANAGER/dispatcher/servlet/rpcrouter If this is a Windows (R) Communication Foundation service to which you have access, please check that you have enabled metadata publishing at the specified address. For help enabling metadata publishing, please refer to the MSDN documentation at http://go.microsoft.com/fwlink/?LinkId=65455.WS-Metadata Exchange Error URI: https://tbe.taleo.net/MANAGER/dispatcher/servlet/rpcrouter Metadata contains a reference that cannot be resolved: 'https://tbe.taleo.net/MANAGER/dispatcher/servlet/rpcrouter'. The content type text/xml;charset=utf-8 of the response message does not match the content type of the binding (application/soap+xml; charset=utf-8). If using a custom encoder, be sure that the IsContentTypeSupported method is implemented properly. The first 544 bytes of the response were: 'SOAP-ENV:ProtocolUnsupported content type "application/soap+xml; charset=utf-8", must be: "text/xml"./MANAGER/dispatcher/servlet/rpcrouter'. The remote server returned an error: (500) Internal Server Error.HTTP GET Error URI: https://tbe.taleo.net/MANAGER/dispatcher/servlet/rpcrouter The HTML document does not contain Web service discovery information.
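
    The two error dumps above both say the same thing: the metadata request is being posted as application/soap+xml (SOAP 1.2) while the server insists on text/xml (SOAP 1.1). Independently of Visual Studio, that behaviour can be probed with a few lines of Python; the envelope below is an empty placeholder rather than a real Taleo request, so only the returned status codes are meaningful:

        # Quick probe of which SOAP content type the endpoint accepts.
        # The envelope is a placeholder; this is not a real Taleo API call.
        import requests

        URL = "https://tbe.taleo.net/MANAGER/dispatcher/servlet/rpcrouter"
        ENVELOPE = """<?xml version="1.0" encoding="utf-8"?>
        <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
          <soap:Body/>
        </soap:Envelope>"""

        for content_type in ("application/soap+xml; charset=utf-8",   # SOAP 1.2 - rejected per the error
                             "text/xml; charset=utf-8"):              # SOAP 1.1 - what the server demands
            r = requests.post(URL, data=ENVELOPE.encode("utf-8"),
                              headers={"Content-Type": content_type, "SOAPAction": '""'})
            print(content_type, "->", r.status_code)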

  • Linq to SQL Repository ~theory~ - Generic but now uses Linq to Objects?

    - by Matt Tolliday
    The project I am currently working on used Linq to SQL as an ORM data access technology. Its an MVC3 Web app. The problem I faced was primarily due to the inability to mock (for testing) the DataContext which gets autogenerated by the DBML designer. So to solve this issue (after much reading) I refactored the repository system which was in place - single repository with seperate and duplicated access methods for each table which ended up with something like 300 methods only 10 of which were unique - into a single repository with generic methods taking the table and returning more generic types to the upper reaches of the application. My question revolves more around the design I've used to get thus far and the differences I'm noticing in the structure of the app. 1) Having refactored the code from the dark ages which used classic Linq to SQL queries: public Billing GetBilling(int id) { var result = ( from bil in _bicDc.Billings where bil.BillingId == id select bil).SingleOrDefault(); return (result); } it now looks like: public T GetRecordWhere<T>(Expression<Func<T, bool>> predicate) where T : class { T result; try { result = _dataContext.GetTable<T>().Where(predicate).SingleOrDefault(); } catch (Exception ex) { throw ex; } return result; } and is used by the controller with a query along the lines of: _repository.GetRecordWhere<Billing>(x => x.BillingId == 1); which is fine, and precisely what I wanted to achieve. ...however.... I'm also having to do the following to get precisely the result set i require in the controller class (the highest point of the app in essence)... viewModel.RecentRequests = _model.GetAllRecordsWhere<Billing>(x => x.BillingId == 1) .Where(x => x.BillingId == Convert.ToInt32(BillingType.Submitted)) .OrderByDescending(x => x.DateCreated). Take(5).ToList(); This - as far as my understanding is correct - is now using Linq to Objects rather than the Linq to SQL queries I was previously? Is this okay practise? It feels wrong to me but I dont know why. Probably because the logic of the queries is in the very highest tier of the app, rather than the lowest, but... I defer to you good people for advice. One of the issues I considered was bringing the entire table into memory but I understand that using the Iqeryable return type the where clause is taken to the database and evaluated there. Thus returning only the resultset i require... i may be wrong. And if you've made it this far, well done. Thank you, and if you have any advice it is very much appreciated!!

  • MVC 2 AntiForgeryToken - Why symmetric encryption + IPrincipal?

    - by Brad R
    We recently updated our solution to MVC 2, and this has updated the way that the AntiForgeryToken works. Unfortunately this does not fit with our AJAX framework any more. The problem is that MVC 2 now uses symmetric encryption to encode some properties about the user, including the user's Name property (from IPrincipal). We are able to securely register a new user using AJAX, after which subsequent AJAX calls will be invalid as the anti forgery token will change when the user has been granted a new principal. There are also other cases when this may happen, such as a user updating their name etc. My main question is why does MVC 2 even bother using symmetric encryption? Any then why does it care about the user name property on the principal? If my understanding is correct then any random shared secret will do. The basic principle is that the user will be sent a cookie with some specific data (HttpOnly!). This cookie is then required to match a form variable sent back with each request that may have side effects (POST's usually). Since this is only meant to protect from cross site attacks it is easy to craft up a response that would easily pass the test, but only if you had full access to the cookie. Since a cross site attacker is not going to have access to your user cookies you are protected. By using symmetric encryption, what is the advantage in checking the contents of the cookie? That is, if I already have sent an HttpOnly cookie the attacker cannot override it (unless a browser has a major security issue), so why do I then need to check it again? After having a think about it it appears to be one of those 'added layer of security' cases - but if your first line of defence has fallen (HttpOnly) then the attacker is going to get past the second layer anyway as they have full access to the users cookie collection, and could just impersonate them directly, instead of using an indirect XSS/CSRF attack. Of course I could be missing a major issue, but I haven't found it yet. If there are some obvious or subtle issues at play here then I would like to be aware of them.
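
    The principle restated above - an HttpOnly cookie whose value must match a token sent back in the form with each POST - does not, by itself, need anything user-specific encrypted into the token; a random or HMAC-derived value tied to the session is enough for the cross-site case, which is exactly the point being questioned. A small Python sketch of that minimal scheme (the secret and session id are placeholders, and it deliberately ignores the extra fields MVC 2 folds in):

        # Minimal anti-forgery token: derive the form token from the session id with
        # an HMAC over a server-side secret, then compare it on every POST.
        import hashlib, hmac, secrets

        SERVER_SECRET = secrets.token_bytes(32)        # per-application secret

        def issue_token(session_id: str) -> str:
            return hmac.new(SERVER_SECRET, session_id.encode(), hashlib.sha256).hexdigest()

        def is_valid(session_id: str, form_token: str) -> bool:
            return hmac.compare_digest(issue_token(session_id), form_token)   # constant time

        session_id = "abc123"                          # whatever the auth cookie identifies
        token = issue_token(session_id)                # embedded in the page's form
        print(is_valid(session_id, token))             # True
        print(is_valid(session_id, "forged-value"))    # False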

  • Would Apache running on port 8080 prevent dynamically loaded scripts in JavaScript?

    - by editor
    Had a nice PHP/HTML/JS prototype working on my personal Linode, then tried to throw it into a work machine. The page adds a script tag dynamically with some JavaScript. It's a bunch of Google charts that update based on different timeslices. That code looks something like this: // jQuery $.post to send the beginning and end timestamps $.post("channel_functions.php", data_to_post, function(data){ // the data that's returned is the javascript I want to load var script = document.createElement('script'); var head= document.getElementsByTagName('head')[0]; var script= document.createElement('script'); var text = document.createTextNode(data); script.type= 'text/javascript'; script.id = 'chart_data'; script.appendChild(text); // Adding script tag to page head.appendChild(script); // Call the function I know were present in the script tag loadTheCharts(); }); function loadTheCharts() { // These are the functions that were loaded dynamically // By this point the script tag is supposed be loaded, added and eval'd function1(); function2(); } Function1() and function2() don't exist until they get added to the dom, but I don't call loadTheCharts() until after the $.post has run so this doesn't seem to be a problem. I'm one of those dirty PHP coders you mother warned you about, so I'm not well versed in JavaScript beyond what I've read in the typical go-to O'Reilly books. But this code worked fine on my personal dev server, so I'm wondering why it wouldn't work on this new machine. The only difference in setup, from what I can tell, is that the new machine is running on port 8080, so it's 192.168.blah.blah:8080/index.php instead of nicedomain.com/index.php. I see the code was indeed added to the dom when I use webmaster tools to "view generated source" but in Firebug I get an error like "function2() is undefined" even though my understanding was that all script tags are eval'ed when added to . My question: Given what I've laid out, and that the machine is running on :8080, is there a reason anyone can think of as to why a dynamically loaded function like function2() would be defined on the Linode and not on the machine running Apache on 8080?

  • IE and Content-disposition inline vs. extension-token

    - by pinkgothic
    Preamble So IE does Mime-Type sniffing. That part's old news. Suggestions of how to combat it tend to be along the lines of 'supply a content-type IE trusts' (i.e. anything that isn't text/plain or application/octet-stream) or 'add extraneous data at the start of the file that is definitely of the type you're serving'. Now, I'm working on an application that has to allow message attachments (like in e-mails), and we want to close up XSS vectors. IE's mime sniffing is one of those vectors - a text/plain file with html content will trigger as html. Recoding isn't an option at this point, changing the attachments the user has provided can only happen if there is absolutely no doubt about the maliciousness of the file - and someone might want to send HTML as text. Now, Microsoft's MSDN article implies the situation might be easier to fix than advertised: If Internet Explorer knows the Content-Type specified and there is no Content-Disposition data, Internet Explorer performs a "MIME sniff," [...] Great! Except I don't have IE nor current means to reliably install it (I realise this is a fairly sad state for a webdeveloper to be in, I hope to fix this soon) and this is grey theory that I can't quite seem to get confirmed one way or the other. Local sources say that line is hogwash - IE will mime sniff anything that is Content-Disposition: inline / <default> and not specific enough for its tastes in -Type. But what about x-* ('extension-token' in the RFC)? Trying to google for how browsers handle Content-Disposition: <extension-token> hasn't yielded anything (though I may just be doing it wrong, my understanding of Google is seriously slipping lately). I found one question that looked promising, but turned out to be a misunderstanding on side of the thread author, meaning that the train of thought was never actually addressed there. Question(s) Does IE really Mime sniff if you expressly pass Content-Disposition: inline? If so: Does anyone here know how browsers handle Content-Disposition: <extension-token>? If they do this in a way that is for my purposes benign, by presuming it to be synonymous with the default (effectively 'inline', though I hear it's not defined anywhere?), is it specific enough for IE not to Mime sniff? Or am I actually shooting myself in the foot by thinking of pursuing this avenue?

  • MSBuild: Items + Batching + CreateItem + Transforms Question

    - by KeithCS
    I have this bit of an msbuild project that is making me wonder why it the outcome is the way it is. Not that it is causing an issue or anything of the sort, but I would like to try and better my understanding of it. <?xml version="1.0" encoding="utf-8" ?> <Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003" DefaultTargets="TestTarget1;TestTarget2" ToolsVersion="3.5"> <ItemGroup> <PathDir Include="C:\RootDir\UniqueDir1"/> <PathDir Include="C:\RootDir\UniqueDir2" /> </ItemGroup> <Target Name="TestTarget1" Outputs="%(PathDir.Identity)"> <PropertyGroup> <RootPath>%(PathDir.Identity)</RootPath> </PropertyGroup> <ItemGroup> <SubDirectory Include="Common1"/> <SubDirectory Include="Common2"/> </ItemGroup> <CreateItem Include="@(SubDirectory->'$(RootPath)\%(Identity)')"> <Output TaskParameter="Include" ItemName="FullPath"/> </CreateItem> <Message Text="@(FullPath)"/> </Target> <Target Name="TestTarget2"> <Message Text="@(FullPath)"/> </Target> </Project> So I have two main paths that are unique, and within each I have two directories with the same names in each of the unique paths. In target1, I am batching against the identity of the items in PathDir, and then performing a transform on item SubDirectory, which contains the common folder names found in the unique directories, to create a new item containing the full paths. So anyways, after that, the output for the targets is as follows: Target 1: C:\RootDir\UniqueDir1\Common1;C:\RootDir\UniqueDir1\Common2 C:\RootDir\UniqueDir2\Common1;C:\RootDir\UniqueDir2\Common2 Target 2: C:\RootDir\UniqueDir1\Common1;C:\RootDir\UniqueDir1\Common2;C:\RootDir\UniqueDir2\Common1;C:\RootDir\UniqueDir2\Common2 So my question I guess is ... why does target1 only display the directories containing the directory it is batching against? I know it probably has to do with batching, but thats all I know.

  • PyOpenGL - passing transformation matrix into shader

    - by M-V
    I am having trouble passing projection and modelview matrices into the GLSL shader from my PyOpenGL code. My understanding is that OpenGL matrices are column major, but when I pass in projection and modelview matrices as shown, I don't see anything. I tried the transpose of the matrices, and it worked for the modelview matrix, but the projection matrix doesn't work either way. Here is the code: import OpenGL from OpenGL.GL import * from OpenGL.GL.shaders import * from OpenGL.GLU import * from OpenGL.GLUT import * from OpenGL.GLUT.freeglut import * from OpenGL.arrays import vbo import numpy, math, sys strVS = """ attribute vec3 aVert; uniform mat4 uMVMatrix; uniform mat4 uPMatrix; uniform vec4 uColor; varying vec4 vCol; void main() { // option #1 - fails gl_Position = uPMatrix * uMVMatrix * vec4(aVert, 1.0); // option #2 - works gl_Position = vec4(aVert, 1.0); // set color vCol = vec4(uColor.rgb, 1.0); } """ strFS = """ varying vec4 vCol; void main() { // use vertex color gl_FragColor = vCol; } """ # particle system class class Scene: # initialization def __init__(self): # create shader self.program = compileProgram(compileShader(strVS, GL_VERTEX_SHADER), compileShader(strFS, GL_FRAGMENT_SHADER)) glUseProgram(self.program) self.pMatrixUniform = glGetUniformLocation(self.program, 'uPMatrix') self.mvMatrixUniform = glGetUniformLocation(self.program, "uMVMatrix") self.colorU = glGetUniformLocation(self.program, "uColor") # attributes self.vertIndex = glGetAttribLocation(self.program, "aVert") # color self.col0 = [1.0, 1.0, 0.0, 1.0] # define quad vertices s = 0.2 quadV = [ -s, s, 0.0, -s, -s, 0.0, s, s, 0.0, s, s, 0.0, -s, -s, 0.0, s, -s, 0.0 ] # vertices self.vertexBuffer = glGenBuffers(1) glBindBuffer(GL_ARRAY_BUFFER, self.vertexBuffer) vertexData = numpy.array(quadV, numpy.float32) glBufferData(GL_ARRAY_BUFFER, 4*len(vertexData), vertexData, GL_STATIC_DRAW) # render def render(self, pMatrix, mvMatrix): # use shader glUseProgram(self.program) # set proj matrix glUniformMatrix4fv(self.pMatrixUniform, 1, GL_FALSE, pMatrix) # set modelview matrix glUniformMatrix4fv(self.mvMatrixUniform, 1, GL_FALSE, mvMatrix) # set color glUniform4fv(self.colorU, 1, self.col0) #enable arrays glEnableVertexAttribArray(self.vertIndex) # set buffers glBindBuffer(GL_ARRAY_BUFFER, self.vertexBuffer) glVertexAttribPointer(self.vertIndex, 3, GL_FLOAT, GL_FALSE, 0, None) # draw glDrawArrays(GL_TRIANGLES, 0, 6) # disable arrays glDisableVertexAttribArray(self.vertIndex) class Renderer: def __init__(self): pass def reshape(self, width, height): self.width = width self.height = height self.aspect = width/float(height) glViewport(0, 0, self.width, self.height) glEnable(GL_DEPTH_TEST) glDisable(GL_CULL_FACE) glClearColor(0.8, 0.8, 0.8,1.0) glutPostRedisplay() def keyPressed(self, *args): sys.exit() def draw(self): glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT) # build projection matrix fov = math.radians(45.0) f = 1.0/math.tan(fov/2.0) zN, zF = (0.1, 100.0) a = self.aspect pMatrix = numpy.array([f/a, 0.0, 0.0, 0.0, 0.0, f, 0.0, 0.0, 0.0, 0.0, (zF+zN)/(zN-zF), -1.0, 0.0, 0.0, 2.0*zF*zN/(zN-zF), 0.0], numpy.float32) # modelview matrix mvMatrix = numpy.array([1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.5, 0.0, -5.0, 1.0], numpy.float32) # render self.scene.render(pMatrix, mvMatrix) # swap buffers glutSwapBuffers() def run(self): glutInitDisplayMode(GLUT_RGBA) glutInitWindowSize(400, 400) self.window = glutCreateWindow("Minimal") glutReshapeFunc(self.reshape) glutDisplayFunc(self.draw) 
glutKeyboardFunc(self.keyPressed) # Checks for key strokes self.scene = Scene() glutMainLoop() glutInit(sys.argv) prog = Renderer() prog.run() When I use option #2 in the shader without either matrix, I get the following output: What am I doing wrong?
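
    One thing that can be checked independently of the shader is the memory layout: numpy arrays are row-major (C order) by default, while glUniformMatrix4fv with the transpose argument set to GL_FALSE expects column-major data, so a matrix written out in textbook row-major form needs either transpose=GL_TRUE or an explicit column-major flatten. Whether or not that turns out to be the root cause here, the sketch below shows the two equivalent ways of passing the same projection matrix (the GL calls are commented out because they need a live context):

        # Two equivalent ways to hand a numpy projection matrix to glUniformMatrix4fv.
        import math
        import numpy as np
        # from OpenGL.GL import glUniformMatrix4fv, GL_TRUE, GL_FALSE   # needs a GL context

        def perspective(fov_deg, aspect, z_near, z_far):
            """Row-major (textbook layout) perspective matrix."""
            f = 1.0 / math.tan(math.radians(fov_deg) / 2.0)
            return np.array([
                [f / aspect, 0.0,  0.0,                                 0.0],
                [0.0,        f,    0.0,                                 0.0],
                [0.0,        0.0,  (z_far + z_near) / (z_near - z_far), 2.0 * z_far * z_near / (z_near - z_far)],
                [0.0,        0.0, -1.0,                                 0.0],
            ], dtype=np.float32)

        p = perspective(45.0, 1.0, 0.1, 100.0)

        # Option 1: keep the row-major matrix and let GL transpose it.
        # glUniformMatrix4fv(pMatrixUniform, 1, GL_TRUE, p)

        # Option 2: flatten to column-major yourself and pass GL_FALSE.
        p_col_major = p.flatten(order="F")
        # glUniformMatrix4fv(pMatrixUniform, 1, GL_FALSE, p_col_major)

        print(p_col_major)   # element for element, this matches the flat pMatrix in the question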

  • XML Return from an Oracle Stored Procedure

    - by Tequila Jinx
    Unfortunately most of my DB experience has been with MSSQL which tends to hold your hand a lot more than Oracle. What I'm trying to do is fairly trivial in tSQL, however, pl/sql is giving me a headache. I have the following procedure: CREATE OR REPLACE PROCEDURE USPX_GetUserbyID (USERID USERS.USERID%TYPE, USERRECORD OUT XMLTYPE) AS BEGIN SELECT XMLELEMENT("user" , XMLATTRIBUTES(u.USERID AS "userid", u.companyid as "companyid", u.usertype as "usertype", u.status as "status", u.personid as "personid") , XMLFOREST( p.FIRSTNAME AS "firstname" , p.LASTNAME AS "lastname" , p.EMAIL AS "email" , p.PHONE AS "phone" , p.PHONEEXTENSION AS "extension") , XMLELEMENT("roles", (SELECT XMLAGG(XMLELEMENT("role", r.ROLETYPE)) FROM USER_ROLES r WHERE r.USERID = USERID AND r.ISACTIVE = 1 ) ) , XMLELEMENT("watches", (SELECT XMLAGG( XMLELEMENT("watch", XMLATTRIBUTES(w.WATCHID AS "id", w.TICKETID AS "ticket") ) ) FROM USER_WATCHES w WHERE w.USERID = USERID AND w.ISACTIVE = 1 ) ) ) AS "RESULT" INTO USERRECORD FROM USERS u LEFT JOIN PEOPLE p ON p.PERSONID = u.PERSONID WHERE u.USERID = USERID; END USPX_GetUserbyID; When executed, it should return an XML document with the following structure: <user userid="" companyid="" usertype="" status="" personid=""> <firstname /> <lastname /> <email /> <phone /> <extension /> <roles> <role /> </roles> <watches> <watch id="" ticket="" /> </watches> </user> When I execute the query itself, replacing the USERID parameter with a string and removing the "into" clause, the query runs fine and returns the expected structure. However, when the procedure attempts to execute the query, passing the results of the XMLELEMENT function into the USERRECORD output parameter, I get the following exception: Error report: ORA-01422: exact fetch returns more than requested number of rows ORA-06512: at "USPX_GETUSERBYID", line 4 ORA-06512: at line 3 01422. 00000 - "exact fetch returns more than requested number of rows" *Cause: The number specified in exact fetch is less than the rows returned. *Action: Rewrite the query or change number of rows requested I'm baffled trying to nail this down, and unfortunately my google-fu hasn't helped. I've found plenty of Oracle SQL|XML examples, but none that deal with XML returns from a procedure. Note: I know that an alternate method of retrieving XML using DBMS methods exists, however, it's my understanding that that functionality is deprecated in favor of SQL|XML.

  • Any tool(s) for knowing the layout (segments) of a running process in Windows?

    - by claws
    I've always been curious about How exactly the process looks in memory? What are the different segments(parts) in it? How exactly will be the program (on the disk) & process (in the memory) are related? My previous question: http://stackoverflow.com/questions/1966920/more-info-on-memory-layout-of-an-executable-program-process In my quest, I finally found a answer. I found this excellent article that cleared most of my queries: http://www.linuxforums.org/articles/understanding-elf-using-readelf-and-objdump_125.html In the above article, author shows how to get different segments of the process (LINUX) & he compares it with its corresponding ELF file. I'm quoting this section here: Courious to see the real layout of process segment? We can use /proc//maps file to reveal it. is the PID of the process we want to observe. Before we move on, we have a small problem here. Our test program runs so fast that it ends before we can even dump the related /proc entry. I use gdb to solve this. You can use another trick such as inserting sleep() before it calls return(). In a console (or a terminal emulator such as xterm) do: $ gdb test (gdb) b main Breakpoint 1 at 0x8048376 (gdb) r Breakpoint 1, 0x08048376 in main () Hold right here, open another console and find out the PID of program "test". If you want the quick way, type: $ cat /proc/`pgrep test`/maps You will see an output like below (you might get different output): [1] 0039d000-003b2000 r-xp 00000000 16:41 1080084 /lib/ld-2.3.3.so [2] 003b2000-003b3000 r--p 00014000 16:41 1080084 /lib/ld-2.3.3.so [3] 003b3000-003b4000 rw-p 00015000 16:41 1080084 /lib/ld-2.3.3.so [4] 003b6000-004cb000 r-xp 00000000 16:41 1080085 /lib/tls/libc-2.3.3.so [5] 004cb000-004cd000 r--p 00115000 16:41 1080085 /lib/tls/libc-2.3.3.so [6] 004cd000-004cf000 rw-p 00117000 16:41 1080085 /lib/tls/libc-2.3.3.so [7] 004cf000-004d1000 rw-p 004cf000 00:00 0 [8] 08048000-08049000 r-xp 00000000 16:06 66970 /tmp/test [9] 08049000-0804a000 rw-p 00000000 16:06 66970 /tmp/test [10] b7fec000-b7fed000 rw-p b7fec000 00:00 0 [11] bffeb000-c0000000 rw-p bffeb000 00:00 0 [12] ffffe000-fffff000 ---p 00000000 00:00 0 Note: I add number on each line as reference. Back to gdb, type: (gdb) q So, in total, we see 12 segment (also known as Virtual Memory Area--VMA). But I want to know about Windows Process & PE file format. Any tool(s) for getting the layout (segments) of running process in Windows? Any other good resources for learning more on this subject? EDIT: Are there any good articles which shows the mapping between PE file sections & VA segments?
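
    The /proc-based technique quoted above is easy to script; the snippet below prints the mappings of the current process (Linux only, since it reads /proc/self/maps). On Windows there is no /proc, but the Sysinternals tools (VMMap, Process Explorer) give a comparable per-process view, and dumpbin /headers shows the PE sections of the executable on disk:

        # Print the memory map (VMAs) of the current process -- Linux only.
        # Each /proc/self/maps line: address range, permissions, offset, dev, inode, path.
        def print_maps():
            with open("/proc/self/maps") as maps:
                for line in maps:
                    fields = line.split()
                    addr, perms = fields[0], fields[1]
                    path = fields[5] if len(fields) > 5 else "[anonymous]"
                    print(f"{addr:>34} {perms} {path}")

        if __name__ == "__main__":
            print_maps()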
