Search Results

Search found 10494 results on 420 pages for 'beyond the documentation'.


  • [Cocoa] Placing an NSTimer in a separate thread

    - by ndg
    I'm trying to set up an NSTimer in a separate thread so that it continues to fire while users interact with the UI of my application. This seems to work, but Leaks reports a number of issues, and I believe I've narrowed it down to my timer code. Currently, updateTimer tries to access an NSArrayController (timersController) which is bound to an NSTableView in my application's interface. From there, I grab the first selected row and alter its timeSpent column. From reading around, I believe what I should be doing is executing the updateTimer function on the main thread, rather than in my timer's secondary thread. I'm posting here in the hopes that someone with more experience can tell me if that's the only thing I'm doing wrong. Having read Apple's documentation on threading, I've found it an overwhelmingly large subject area.

        NSThread *timerThread = [[[NSThread alloc] initWithTarget:self selector:@selector(startTimerThread) object:nil] autorelease];
        [timerThread start];

        - (void)startTimerThread {
            NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
            NSRunLoop *runLoop = [NSRunLoop currentRunLoop];
            activeTimer = [[NSTimer scheduledTimerWithTimeInterval:1.0 target:self selector:@selector(updateTimer:) userInfo:nil repeats:YES] retain];
            [runLoop run];
            [pool release];
        }

        - (void)updateTimer:(NSTimer *)timer {
            NSArray *selectedTimers = [timersController selectedObjects];
            id selectedTimer = [selectedTimers objectAtIndex:0];
            NSNumber *currentTimeSpent = [selectedTimer timeSpent];
            [selectedTimer setValue:[NSNumber numberWithInt:[currentTimeSpent intValue] + 1] forKey:@"timeSpent"];
        }

        - (void)stopTimer {
            [activeTimer invalidate];
            [activeTimer release];
        }
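
    (A minimal sketch of the main-thread hand-off described above, reusing the question's updateTimer:/timersController names. The timer still fires on the secondary thread, but the mutation of the bound object is pushed onto the main thread, since Cocoa bindings and KVO notifications are not thread-safe. incrementSelectedTimer is a made-up helper name.)

        - (void)updateTimer:(NSTimer *)timer {
            // Hop to the main thread before touching the NSArrayController.
            [self performSelectorOnMainThread:@selector(incrementSelectedTimer)
                                   withObject:nil
                                waitUntilDone:NO];
        }

        - (void)incrementSelectedTimer {
            NSArray *selectedTimers = [timersController selectedObjects];
            if ([selectedTimers count] == 0) return;
            id selectedTimer = [selectedTimers objectAtIndex:0];
            NSNumber *currentTimeSpent = [selectedTimer timeSpent];
            [selectedTimer setValue:[NSNumber numberWithInt:[currentTimeSpent intValue] + 1]
                             forKey:@"timeSpent"];
        }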


  • Somewhat lost with jquery + php + json

    - by Luis Armando
    I am starting to use jQuery's $.ajax(), but I can't get back what I want. I send this:

        $(function() {
            $.ajax({
                url: "graph_data.php",
                type: "POST",
                data: "casi=56&nada=48&nuevo=98&perfecto=100&vales=50&apenas=70&yeah=60",
                dataType: "json",
                error: function (xhr, desc, exceptionobj) {
                    document.writeln("El error de XMLHTTPRequest dice: " + xhr.responseText);
                },
                success: function (json) {
                    if (json.error) {
                        alert(json.error);
                        return;
                    }
                    var output = "";
                    for (p in json) {
                        output += p + " : " + json[p] + "\n";
                    }
                    document.writeln("Results: \n\n" + output);
                }
            });
        });

    and my PHP is:

        <?php
        $data = $_POST['data'];

        function array2json($data) {
            $json = $data;
            return json_encode($json);
        }
        ?>

    When I execute this, the output is just "Results:" and nothing else. I used to have an echo array2json statement in the PHP, but it just gave back gibberish. I really don't know what I'm doing wrong, and I've googled for about 3 hours just getting basically the same stuff. Also, I don't know how to pass parameters to "data:" in the $.ajax function in another way, like getting info from the web page. Can anyone please help me?

    Edit: I did what you suggested and it prints the data now, thank you very much =) However, I was wondering how I can send the data to the "data:" part in jQuery so it takes it from, let's say, user input. Also, I was checking the PHP documentation and it says I'm allowed to write something like:

        json_encode($a, JSON_HEX_TAG | JSON_HEX_APOS | JSON_HEX_QUOT | JSON_HEX_AMP)

    However, if I do that I get an error saying that json_encode accepts 1 parameter and I'm giving 2. Any idea why? I'm using PHP 5.2.
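
    (A sketch of the follow-up about feeding "data:" from user input: an object literal is serialized by jQuery, and its values can be read from form fields at click time. The #send/#casi/#nada element IDs are made up for illustration; the PHP side would then read $_POST['casi'], $_POST['nada'], and so on.)

        $(function() {
            $("#send").click(function() {
                $.ajax({
                    url: "graph_data.php",
                    type: "POST",
                    // jQuery URL-encodes this object into casi=...&nada=...
                    data: {
                        casi: $("#casi").val(),
                        nada: $("#nada").val()
                    },
                    dataType: "json",
                    success: function (json) {
                        // handle the decoded response here
                    }
                });
            });
        });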


  • Why is TRest in Tuple<T1... TRest> not constrained?

    - by Anthony Pegram
    In a Tuple, if you have more than 7 items, you can provide an 8th item that is another tuple, define up to 7 items in it, then another tuple as its 8th, and so on down the line. However, there is no constraint on the 8th item at compile time. For example, this is legal code for the compiler:

        var tuple = new Tuple<int, int, int, int, int, int, int, double>
            (1, 1, 1, 1, 1, 1, 1, 1d);

    even though the IntelliSense documentation says that TRest must be a Tuple. You do not get any error when writing or building the code; the problem does not manifest until runtime, in the form of an ArgumentException. You can roughly implement a Tuple in a few minutes, complete with a Tuple-constrained 8th item. I just wonder why the constraint was left out of the current implementation. Is it possibly a forward-compatibility issue, where they could add more elements with a hypothetical C# 5?

    Short version of the rough implementation:

        interface IMyTuple { }

        class MyTuple<T1> : IMyTuple
        {
            public T1 Item1 { get; private set; }
            public MyTuple(T1 item1) { Item1 = item1; }
        }

        class MyTuple<T1, T2> : MyTuple<T1>
        {
            public T2 Item2 { get; private set; }
            public MyTuple(T1 item1, T2 item2) : base(item1) { Item2 = item2; }
        }

        class MyTuple<T1, T2, TRest> : MyTuple<T1, T2> where TRest : IMyTuple
        {
            public TRest Rest { get; private set; }
            public MyTuple(T1 item1, T2 item2, TRest rest) : base(item1, item2) { Rest = rest; }
        }

        ...

        var mytuple = new MyTuple<int, int, MyTuple<int>>(1, 1, new MyTuple<int>(1)); // legal
        var mytuple2 = new MyTuple<int, int, int>(1, 2, 3); // illegal
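
    (For contrast, a short sketch of the two forms side by side: the intended nesting compiles and runs, while the unconstrained form compiles but fails at construction time, which is the runtime ArgumentException the question mentions.)

        // Compiles and runs: the eighth type argument is itself a tuple.
        var ok = new Tuple<int, int, int, int, int, int, int, Tuple<double>>(
            1, 1, 1, 1, 1, 1, 1, Tuple.Create(1d));

        // Also compiles, but throws ArgumentException at runtime,
        // because double is not a tuple type.
        var bad = new Tuple<int, int, int, int, int, int, int, double>(
            1, 1, 1, 1, 1, 1, 1, 1d);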


  • SWIG interface to receive an opaque struct reference in Java through function argument

    - by Beeo
    I am trying to use SWIG in order to use the Spotify API (libspotify) for Android: https://developer.spotify.com/technologies/libspotify/

    I am having trouble defining the SWIG interface file so that I can successfully call the following native C function:

        sp_error sp_session_create(const sp_session_config *config, sp_session **sess);

    which in C would be called like this:

        // config struct defined previously
        sp_session *sess;
        sp_session_create(&config, &sess);

    But in Java I would need to call it like this:

        // config object defined previously
        sp_session javaSess = new sp_session();
        sp_session_create(config, javaSess);

    sp_session is an opaque struct and is only defined in libspotify's API.h file as:

        typedef struct sp_session sp_session;

    I'm expecting the libspotify library to create it and give me a reference to it. The only thing I need that reference for is to pass it to other functions in the API. I believe the answer lies in the SWIG interface and typemaps, but I have been unsuccessful in trying to apply the examples I found in the documentation:

    http://www.swig.org/Doc2.0/SWIGDocumentation.html#Java_struct_pointer_pointer
    http://www.swig.org/Doc2.0/SWIGDocumentation.html#Java_using_typemaps_return_arguments

    Help!
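
    (One way to sidestep the pointer-to-pointer typemap machinery entirely is to wrap the call in a small helper that returns the opaque pointer directly; SWIG then exposes the return value as an ordinary Java proxy object. This is only a sketch, not the typemap solution from the SWIG docs: create_session_helper is an invented name, SP_ERROR_OK is assumed to be the library's success code, and the error handling is deliberately minimal.)

        %module spotify

        %{
        #include "api.h"
        %}

        %inline %{
        /* Returns NULL on failure; the sp_error code is dropped in this sketch. */
        sp_session *create_session_helper(const sp_session_config *config) {
            sp_session *sess = NULL;
            if (sp_session_create(config, &sess) != SP_ERROR_OK) {
                return NULL;
            }
            return sess;
        }
        %}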


  • C++ wxWidgets Event (Focus) Handling

    - by Wallter
    Due to comments, I added the following code (in BasicPanel):

        Connect(CTRL_ONE,   wxEVT_KILL_FOCUS, (wxObjectEventFunction)&BasicPanel::OnKillFocus);
        Connect(CTRL_TWO,   wxEVT_KILL_FOCUS, (wxObjectEventFunction)&BasicPanel::OnKillFocus);
        Connect(CTRL_THREE, wxEVT_KILL_FOCUS, (wxObjectEventFunction)&BasicPanel::OnKillFocus);
        Connect(CTRL_FOUR,  wxEVT_KILL_FOCUS, (wxObjectEventFunction)&BasicPanel::OnKillFocus);
        Connect(CTRL_FIVE,  wxEVT_KILL_FOCUS, (wxObjectEventFunction)&BasicPanel::OnKillFocus);

    the enums:

        CTRL_NAME        = wxID_HIGHEST + 5, // 6004
        CTRL_ADDRESS     = wxID_HIGHEST + 6, // 6005
        CTRL_PHONENUMBER = wxID_HIGHEST + 7, // 6006
        CTRL_SS          = wxID_HIGHEST + 8, // 6007
        CTRL_EMPNUMBER   = wxID_HIGHEST + 9  // 6008

    and the OnKillFocus function (the declaration is included as suggested):

        void BasicPanel::OnKillFocus(wxFocusEvent& event)
        {
            switch (event.GetId()) {
                case 6004:
                    ...
                    break;
                ...
            }
        }

    All of these additions do nothing when the user changes which text box has the focus...

    Q1: I am using wxWidgets (C++) and have come across a problem for which I cannot locate any help. I have created several wxTextCtrl boxes and would like the program to update the simple calculations in them when the user 'kills the focus.' I could not find any documentation on this subject on the wxWidgets web page, and googling it only brought up wxPython. The two events I have found are EVT_COMMAND_KILL_FOCUS and EVT_KILL_FOCUS, and I could not find a snippet for either of them. Could anyone give me a short example or lead me to a page that would be helpful?

    Q2: Would I have to create an event handler for the focus being killed for each of my 8 wxTextCtrl boxes? In the case that I have to create a different event: how would I get each event to differentiate from the others? I know I will have to create new wxIDs for each of the wxTextCtrl boxes, but how do I get the correct one to be triggered?

        class BasicPanel : public wxPanel
        {
            ...
            wxTextCtrl* one;
            wxTextCtrl* two;
            wxTextCtrl* three;
            wxTextCtrl* four;
            ...
        };
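
    (A sketch of one way to wire this up, under the question's own names (BasicPanel, OnKillFocus, the wxTextCtrl* members). wxEVT_KILL_FOCUS is not a command event, so it is delivered to the text control itself rather than propagating up to the parent panel; connecting on each control and passing the panel as the event sink routes all of them into a single handler. The constructor signature is assumed.)

        BasicPanel::BasicPanel(wxWindow* parent) : wxPanel(parent)
        {
            // ... create the wxTextCtrl members first ...
            one->Connect(wxID_ANY, wxEVT_KILL_FOCUS,
                         wxFocusEventHandler(BasicPanel::OnKillFocus), NULL, this);
            two->Connect(wxID_ANY, wxEVT_KILL_FOCUS,
                         wxFocusEventHandler(BasicPanel::OnKillFocus), NULL, this);
            // ... and so on for the remaining controls ...
        }

        void BasicPanel::OnKillFocus(wxFocusEvent& event)
        {
            // One handler for all controls: identify the sender by object.
            if (event.GetEventObject() == one) {
                // recalculate the value shown in 'one'
            } else if (event.GetEventObject() == two) {
                // recalculate 'two'
            }
            event.Skip();  // keep default focus processing
        }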


  • How to paginate Django with other get variables?

    - by vagabond
    I am having problems using pagination in Django. Take the URL below as an example:

        http://127.0.0.1:8000/users/?sort=first_name

    On this page I sort a list of users by their first_name. Without a sort GET variable it defaults to sorting by id. Now if I click the next link I expect the following URL:

        http://127.0.0.1:8000/users/?sort=first_name&page=2

    Instead I lose all GET variables and end up with

        http://127.0.0.1:8000/users/?page=2

    This is a problem because the second page is sorted by id instead of first_name. If I use request.get_full_path I will eventually end up with an ugly URL:

        http://127.0.0.1:8000/users/?sort=first_name&page=2&page=3&page=4

    What is the solution? Is there a way to access the GET variables in the template and replace the value for the page? I am using pagination as described in Django's documentation, and my preference is to keep using it. The template code I am using is similar to this:

        {% if contacts.has_next %}
            <a href="?page={{ contacts.next_page_number }}">next</a>
        {% endif %}
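
    (A sketch of one common fix: copy the current query string in the view, drop the page parameter, and hand the remainder to the template so every pagination link re-attaches it. The view name, template name and render call are placeholders; 'sort' and 'contacts' follow the question.)

        from django.shortcuts import render_to_response

        def user_list(request):
            params = request.GET.copy()          # mutable copy of the query string
            params.pop('page', None)             # drop any existing page number
            extra_query = params.urlencode()     # e.g. "sort=first_name"
            # ... build the paginator and the 'contacts' page object as before ...
            return render_to_response('users.html', {
                'contacts': contacts,
                'extra_query': extra_query,
            })

    and in the template:

        {% if contacts.has_next %}
            <a href="?{{ extra_query }}&page={{ contacts.next_page_number }}">next</a>
        {% endif %}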


  • Fetching just the Key/id from a ReferenceProperty in App Engine

    - by ozone
    Hi SO, I could use a little help in App Engine land... Using the [Python] API I create relationships like this example from the docs:

        class Author(db.Model):
            name = db.StringProperty()

        class Story(db.Model):
            author = db.ReferenceProperty(Author)

        story = db.get(story_key)
        author_name = story.author.name

    As I understand it, that example will make two datastore queries: one to fetch the Story and then one to dereference the Author in order to access the name. But I want to be able to fetch the id, so I'd do something like:

        story = db.get(story_key)
        author_id = story.author.key().id()

    I want to just get the id from the reference. I do not want to have to dereference (and therefore query the datastore for) the ReferenceProperty value. The documentation says that the value of a ReferenceProperty is a Key, which leads me to think that I could just call .id() on the reference's value. But it also says: "The ReferenceProperty model provides features for Key property values such as automatic dereferencing." I can't find anything that explains when this dereferencing takes place. Is it safe to call .id() on the ReferenceProperty's value? Can it be assumed that calling .id() will not cause a datastore lookup?
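
    (A short sketch of the usual way to read the stored Key without triggering a fetch: call get_value_for_datastore() on the property object itself, passing the entity instance, whereas going through story.author resolves the reference first.)

        story = db.get(story_key)

        # Returns the raw db.Key held in the reference; no extra datastore read.
        author_key = Story.author.get_value_for_datastore(story)

        author_id = author_key.id()      # numeric id (if the Author has no key_name)
        author_name = author_key.name()  # key_name (if one was assigned)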


  • Boost Asio UDP retrieve last packet in socket buffer

    - by Alberto Toglia
    I have been messing around with Boost Asio for some days now, but I got stuck on this weird behavior. Please let me explain. Computer A is sending continuous UDP packets every 500 ms to computer B. Computer B wants to read A's packets at its own pace, but only wants A's last packet, obviously the most up-to-date one. It has come to my attention that when I do a:

        mSocket.receive_from(boost::asio::buffer(mBuffer), mEndPoint);

    I can get OLD packets that were not processed (almost every time). Does this make any sense? A friend of mine told me that sockets maintain a buffer of packets, and therefore if I read at a lower frequency than the sender this could happen. So, the first question is: how is it possible to receive the last packet and discard the ones I missed?

    Later I tried using the async example from the Boost documentation but found it did not do what I wanted: http://www.boost.org/doc/libs/1_36_0/doc/html/boost_asio/tutorial/tutdaytime6.html

    From what I could tell, async_receive_from should call the method "handle_receive" when a packet arrives, and that works for the first packet after the service was "run". If I want to keep listening on the port, I should call async_receive_from again in the handler code, right? BUT what I found is that I start an infinite loop: it doesn't wait until the next packet, it just enters "handle_receive" again and again. I'm not writing a server application, and a lot of things are going on (it's a game), so my second question is: do I have to use threads to use the async receive method properly? Is there some example with threads and async receive? Thanks for your attention.
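
    (A sketch of the "keep only the newest datagram" idea for the first question, reusing mSocket/mBuffer/mEndPoint from above: drain everything currently queued on the socket and remember only the last read. processPacket is a made-up handler name.)

        std::size_t lastSize = 0;

        // available() reports how many bytes are waiting; loop until the
        // OS-level receive buffer is empty, overwriting mBuffer each time.
        while (mSocket.available() > 0)
        {
            lastSize = mSocket.receive_from(boost::asio::buffer(mBuffer), mEndPoint);
        }

        if (lastSize > 0)
        {
            // mBuffer[0 .. lastSize) now holds only the most recent packet.
            processPacket(mBuffer, lastSize);
        }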


  • Boost::Asio - Removing the "null" character at the end of TCP packets

    - by shump
    I'm trying to make a simple MSN client, mostly for fun but also for educational purposes. I started out with some TCP packet sending and receiving using Boost Asio, as I want cross-platform support. I have managed to send a "VER" command and receive its response. However, after I send the following "CVR" command, Asio throws an "End of file" error. After some further research I found, by packet sniffing, that my TCP packets to the messenger server get an extra "null" character (ASCII code 00) at the end of the message. This means that my VER command gets an extra character at the end, which I don't think the messenger server likes, so it shuts down the connection when I try to read the CVR response.

    This is how my packet looks when sniffing it (its payload):

        (Hex:)  56 45 52 20 31 20 4d 53 4e 50 31 35 20 43 56 52 30 0a 0a 00
        (Char:) VER 1 MSNP15 CVR 0...

    and this is how Adium's (a chat client for OS X) packet looks:

        (Hex:)  56 45 52 20 31 20 4d 53 4e 50 31 35 20 43 56 52 30 0d 0a
        (Char:) VER 1 MSNP15 CVR 0..

    So my question is whether there is any way to remove the null character at the end of each packet, or if I've misunderstood something and used Asio in a wrong way. My write function (slightly edited) looks like this:

        int sendVERMessage()
        {
            boost::system::error_code ignored_error;
            char sendBuf[] = "VER 1 MSNP15 CVR0\r\n";
            boost::asio::write(socket, boost::asio::buffer(sendBuf),
                               boost::asio::transfer_all(), ignored_error);
            if (ignored_error)
            {
                cout << "Failed to send to host!" << endl;
                return 1;
            }
            cout << "VER message sent!" << endl;
            return 0;
        }

    And here's the main documentation on the MSN protocol I'm using. Hope I've been clear enough.
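
    (A sketch of the usual cause of that trailing byte: boost::asio::buffer(sendBuf) covers the whole char array, including the string literal's terminating '\0', which matches the extra 00 in the capture. Either exclude that byte explicitly or switch to std::string.)

        char sendBuf[] = "VER 1 MSNP15 CVR0\r\n";
        boost::asio::write(socket,
                           boost::asio::buffer(sendBuf, sizeof(sendBuf) - 1),  // drop the trailing NUL
                           boost::asio::transfer_all(), ignored_error);

        // or, equivalently:
        std::string msg = "VER 1 MSNP15 CVR0\r\n";
        boost::asio::write(socket, boost::asio::buffer(msg),  // std::string carries no hidden NUL
                           boost::asio::transfer_all(), ignored_error);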


  • How do I stop js files being cached in IE?

    - by DoctaJonez
    Hello stackers! I've created a page that uses the CKEditor javascript rich edit control. It's a pretty neat control, especially seeing as it's free, but I'm having serious issues with the way it allows you to add templates. To add a template you need to modify the templates js file in the CKEditor templates folder. The documentation page describing it is here. This works fine until I want to update a template or add a new one (or anything else that requires me to modify the js file). Internet Explorer caches the js file and doesn't pick up the update. Emptying the cache allows the update to be picked up, but this isn't an acceptable solution. Whenever I update a template I do not want to tell all of the users across the organisation to empty their IE cache. There must be a better way! Is there a way to stop IE caching the js file? Or is there another solution to this problem?
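
    (One sketch of the standard workaround: version the templates file's URL so that any change to the query string forces IE to fetch it again. The config option name follows the CKEditor templates documentation referenced above; the path and version value are placeholders you would bump on each deployment.)

        CKEDITOR.replace('editor1', {
            // A changed ?v=... value makes the browser treat this as a new resource.
            templates_files: ['/ckeditor/plugins/templates/templates/default.js?v=20100412']
        });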


  • Is anyone familiar with SDPT.clsSDPT?

    - by David Stratton
    Normally I wouldn't ask this kind of question here, but I'm desperate at this point. I'm attempting to support a classic ASP app written by a predecessor who is no longer available. Keeping it short, several applications use a DLL to perform encryption of sensitive data. This DLL is named SDPT.dll, and the line of code used to create an object is:

        set objSDPT = server.CreateObject("SDPT.clsSDPT")

    At this point, I am getting errors in a critical app on one of my servers, and I've actually hit a dead end. The error is a standard "Server.CreateObject Failed" message, which I know how to troubleshoot in most cases. However, in this case, all of my normal tries, plus several hours of Google searches, are coming up with nothing that works. I'm not so much looking for help in troubleshooting the issue as I am in finding any sort of reference on this third-party component. Even finding that is proving difficult, so I'm resorting to asking any of the seasoned developers who hang out here if they are familiar with this product, who it was developed by, and whether any documentation on it exists anywhere.


  • Axis2 SOAP Envelope Header Information

    - by BigZig
    I'm consuming a web service that places an authentication token in the SOAP envelope header. It appears (through looking at the samples that came with the WS WSDL) that if the stub is generated in .NET, this header information is exposed through a member variable in the stub class. However, when I generate my Axis2 Java stub using WSDL2Java, it doesn't appear to be exposed anywhere. What is the correct way to extract this information from the SOAP envelope header?

    WSDL: http://www.vbar.com/zangelo/SecurityService.wsdl

    C# sample:

        using System;
        using SignInSample.Security; // web service
        using SignInSample.Document; // web service

        namespace SignInSample
        {
            class SignInSampleClass
            {
                [STAThread]
                static void Main(string[] args)
                {
                    // login to the Vault and set up the document service
                    SecurityService secSvc = new SecurityService();
                    secSvc.Url = "http://localhost/AutodeskDM/Services/SecurityService.asmx";
                    secSvc.SecurityHeaderValue = new SignInSample.Security.SecurityHeader();
                    secSvc.SignIn("Administrator", "", "Vault");

                    DocumentServiceWse docSvc = new DocumentServiceWse();
                    docSvc.Url = "http://localhost/AutodeskDM/Services/DocumentService.asmx";
                    docSvc.SecurityHeaderValue = new SignInSample.Document.SecurityHeader();
                    docSvc.SecurityHeaderValue.Ticket = secSvc.SecurityHeaderValue.Ticket;
                    docSvc.SecurityHeaderValue.UserId = secSvc.SecurityHeaderValue.UserId;
                }
            }
        }

    The sample illustrates what I'd like to do. Notice how the secSvc instance has a SecurityHeaderValue member variable that is populated after a successful secSvc.SignIn() invocation. Here's some relevant API documentation regarding the SignIn method: "Although there is no return value, a successful sign in will populate the SecurityHeaderValue of the security service. The SecurityHeaderValue information is then used for other web service calls."


  • File streaming in PHP - How to replicate this C#.net code in PHP?

    - by openid_kenja
    I'm writing an interface to a web service where we need to upload configuration files. The documentation only provides a sample in C#.NET, which I am not familiar with. I'm trying to implement this in PHP. Can someone familiar with both languages point me in the right direction? I can figure out all the basics, but I'm trying to find suitable PHP replacements for the FileStream, ReadBytes, and UploadDataFile functions. I believe that the RecService object contains the URL for the web service. Thanks for your help!

        private void UploadFiles()
        {
            clientAlias = "<yourClientAlias>";
            string filePath = "<pathToYourDataFiles>";
            string[] fileList = {"Config.txt", "ProductDetails.txt", "BrandNames.txt",
                                 "CategoryNames.txt", "ProductsSoldOut.txt", "Sales.txt"};
            RecommendClient RecService = new RecommendClient();

            for (int i = 0; i < fileList.Length; i++)
            {
                bool lastFile = (i == fileList.Length - 1); // start generator after last file
                try
                {
                    string fileName = filePath + fileList[i];
                    if (!File.Exists(fileName))
                    {
                        continue; // file not found
                    }

                    // set up a file stream and binary reader for the selected file
                    // and convert to byte array
                    FileStream fStream = new FileStream(fileName, FileMode.Open, FileAccess.Read);
                    BinaryReader br = new BinaryReader(fStream);
                    byte[] data = br.ReadBytes((int)numBytes);
                    br.Close();

                    // pass byte array to the web service
                    string result = RecService.UploadDataFile(clientAlias, fileList[i], data, lastFile);
                    fStream.Close();
                    fStream.Dispose();
                }
                catch (Exception ex)
                {
                    // log an error message
                }
            }
        }
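
    (A rough PHP sketch of the same loop, under the assumption that the service is SOAP-based: file_get_contents() stands in for the FileStream/BinaryReader pair, and a SoapClient call stands in for RecService.UploadDataFile(). The WSDL URL, parameter names, and how the service expects the bytes encoded are all assumptions to check against the real service description.)

        <?php
        $clientAlias = 'yourClientAlias';
        $filePath    = '/path/to/your/data/files/';
        $fileList    = array('Config.txt', 'ProductDetails.txt', 'BrandNames.txt',
                             'CategoryNames.txt', 'ProductsSoldOut.txt', 'Sales.txt');

        $client = new SoapClient('http://example.com/RecommendService?wsdl');

        foreach ($fileList as $i => $file) {
            $lastFile = ($i === count($fileList) - 1); // start generator after last file
            $fileName = $filePath . $file;
            if (!file_exists($fileName)) {
                continue; // file not found
            }

            // whole file as a binary-safe string (PHP strings are byte arrays)
            $data = file_get_contents($fileName);

            try {
                $result = $client->UploadDataFile(array(
                    'clientAlias' => $clientAlias,
                    'fileName'    => $file,
                    'data'        => $data,
                    'lastFile'    => $lastFile,
                ));
            } catch (SoapFault $e) {
                error_log($e->getMessage()); // log an error message
            }
        }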


  • When to drop an IT job

    - by Nippysaurus
    In my career I have had two programming jobs. Both were in the field I am most familiar with (C# / MSSQL), but I quit both for the same reason: unmanageable code and a bad (loose) company structure. The jobs had something else in common: small companies (in one I was the only developer). Currently I am in the following position:

    - I am given written instructions which are almost impossible to follow (somewhat of a fool's errand).
    - We are given short time constraints, but are seldom asked how long the work will take; when we are asked, our estimate is always too long and needs to be shorter (and when the work ends up taking longer than they need it to, it's always our fault).
    - There is no time for proper documentation, but we get blamed for not documenting (see the previous point).
    - Management is constantly screwing me around, saying I'm underperforming on a given task (which is not true) and switching me to a task which is much more confusing.

    So I must ask my fellow developers: how bad does a job need to be before you would consider jumping ship? And what should you look out for when considering taking a job? In future I will be asking about documented procedures, release control, bug management and adoption of new technologies.

    EDIT: Let me add some more fuel to the fire... I have been in my current job for just over a year, and the work I am doing almost never uses any of the knowledge I have gained from the other work I have done here. Everything is a giant learning curve. Because of this, about 30% of my time goes to learning what is going on with this new product (whose owner / original developer has left the company), 30% to trying to find the relevant documentation that makes the whole thing make sense, 30% to actually finding where to make the change, and 10% to actually making the change.


  • How exactly do MbUnit's [Parallelizable] and DegreeOfParallelism work?

    - by BenA
    I thought I understood how MbUnit's parallel test execution worked, but the behaviour I'm seeing differs enough from my expectation that I suspect I'm missing something! I have a set of UI tests that I wish to run concurrently. All of the tests are in the same assembly, split across three different namespaces. All of the tests are completely independent of one another, so I'd like all of them to be eligible for parallel execution. To that end, I put the following in AssemblyInfo.cs:

        [assembly: DegreeOfParallelism(8)]
        [assembly: Parallelizable(TestScope.All)]

    My understanding was that this combination of assembly attributes should cause all of the tests to be considered [Parallelizable], and that the test runner should use 8 threads during execution. My individual tests are marked with the [Test] attribute and nothing else. None of them are data-driven. However, what I actually see is at most 5-6 threads being used, meaning that my test runs are taking longer than they should. Am I missing something? Do I need to do anything else to ensure that all 8 threads are being used by the runner?

    N.B. The behaviour is the same irrespective of which runner I use. The GUI, command line and TD.Net runners all behave as described above, again leading me to think I've missed something.

    EDIT: As pointed out in the comments, I'm running v3.1 of MbUnit (update 2, build 397). The documentation suggests that the assembly-level [Parallelizable] attribute is available, but it also seems to reference v3.2 of the framework despite that not yet being available.

    EDIT 2: To further clarify, the structure of my assembly is as follows:

        assembly
        - namespace
          - fixture
            - tests (each carrying only the [Test] attribute)
          - fixture
            - tests (each carrying only the [Test] attribute)
        - namespace
          - fixture
            - tests (each carrying only the [Test] attribute)
          - fixture
            - tests (each carrying only the [Test] attribute)
        - namespace
          - fixture
            - tests (each carrying only the [Test] attribute)
          - fixture
            - tests (each carrying only the [Test] attribute)


  • My QFileSystemModel doesn't work as expected in PyQt

    - by Skilldrick
    I'm learning the Qt Model/View architecture at the moment, and I've found something that doesn't work as I'd expect it to. I've got the following code (adapted from Qt Model Classes):

        from PyQt4 import QtCore, QtGui

        model = QtGui.QFileSystemModel()
        parentIndex = model.index(QtCore.QDir.currentPath())
        print model.isDir(parentIndex)            # prints True
        print model.data(parentIndex).toString()  # prints name of current directory

        childIndex = model.index(0, 0, parentIndex)
        print model.data(childIndex).toString()

        rows = model.rowCount(parentIndex)
        print rows  # prints 0 (even though the current directory has directory and file children)

    The question: is this a problem with PyQt, have I just done something wrong, or am I completely misunderstanding QFileSystemModel? According to the documentation, model.rowCount(parentIndex) should return the number of children in the current directory. The QFileSystemModel docs say that it needs an instance of a GUI application, so I've also placed the above code in a QWidget as follows, but with the same result:

        import sys
        from PyQt4 import QtCore, QtGui

        class Widget(QtGui.QWidget):
            def __init__(self, parent=None):
                QtGui.QWidget.__init__(self, parent)

                model = QtGui.QFileSystemModel()
                parentIndex = model.index(QtCore.QDir.currentPath())
                print model.isDir(parentIndex)
                print model.data(parentIndex).toString()

                childIndex = model.index(0, 0, parentIndex)
                print model.data(childIndex).toString()

                rows = model.rowCount(parentIndex)
                print rows

        def main():
            app = QtGui.QApplication(sys.argv)
            widget = Widget()
            widget.show()
            sys.exit(app.exec_())

        if __name__ == '__main__':
            main()
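
    (A sketch of what is usually going on: QFileSystemModel populates directories lazily on a background thread, so rowCount() stays 0 until the directory has actually been loaded. Calling setRootPath() and waiting for the directoryLoaded signal (added in Qt 4.7) shows the populated count; the event loop must be running for the population to happen.)

        import sys
        from PyQt4 import QtCore, QtGui

        def on_loaded(path):
            index = model.index(path)
            print 'children of', path, ':', model.rowCount(index)
            app.quit()

        app = QtGui.QApplication(sys.argv)
        model = QtGui.QFileSystemModel()
        model.directoryLoaded.connect(on_loaded)      # emitted once the directory is populated
        model.setRootPath(QtCore.QDir.currentPath())  # starts the background population
        sys.exit(app.exec_())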


  • JavaScript window object element properties

    - by Timothy
    A coworker showed me the following code and asked me why it worked.

        <span id="myspan">Do you like my hat?</span>
        <script type="text/javascript">
            var spanElement = document.getElementById("myspan");
            alert("Here I am! " + spanElement.innerHTML + "\n" + myspan.innerHTML);
        </script>

    I explained that a property is attached to the window object with the name of the element's id when the browser parses the document, and that it contains a reference to the appropriate DOM node. It's sort of as if window.myspan = document.getElementById("myspan") were called behind the scenes as the page is being rendered. The ensuing discussion raised a few questions:

    - The window object and most of the DOM are not part of the official JavaScript/ECMA standards, but is the above behavior documented in any other official literature, perhaps browser-related?
    - The above works in a browser (at least the main contenders) because there is a window object, but fails in something like Rhino. Is writing code that relies on this considered bad practice because it makes too many assumptions about the execution environment?
    - Are there any browsers in which the above would fail, or is this considered standard behavior across the board?

    Does anyone here know the answers to those questions and would be willing to enlighten me? I tried a quick internet search, but I admit I'm not sure how to even properly phrase the query. Pointers to references and documentation are welcome.


  • Reorganizing development environment for single developer/small shop

    - by Matthew
    I have been developing for my company for approximately three years. We serve up a web portal using Microsoft .NET and MS SQL Server on DotNetNuke. I am going to leave my full-time job at the end of April. I am leaving on good terms, and I really care about this company and the state of the web project. Because I haven't worked in a team environment in a long time, I have probably lost touch with what 'real' setups look like. When I leave, I predict the company will either find another developer to take over, or at least have developers work on a contractual basis. Because I have not worked with other developers, I am very concerned about leaving the company (and the developer they hire) with a jumbled mess. I'd like to believe I am a good developer and that everything makes sense, but I have no way to tell.

    My question is: how do I set up the development environment so the company and the next developer will have little trouble getting started? What would you, as a developer, like to have in place before working on a project you've never worked on? Here's some relevant information:

    - There is a development server onsite and a production server offsite in a data center.
    - There is a server where backups and source code (SourceGear Vault) are stored.
    - There is no formal documentation, but there are comments in the code.
    - The company budget is tight, so free suggestions will help the most.
    - I will be around after the end of April on a consulting basis, so simple questions can still reach me, but I will not be available full time to train someone.


  • How to get the key name of a referenced entity property from an entity instance without a datastore read in Google App Engine?

    - by Sumeet Pareek
    Consider I have the following models:

        class Team(db.Model):    # say I have just 5 teams
            name = db.StringProperty()

        class Player(db.Model):  # say I have thousands of players
            name = db.StringProperty()
            team = db.ReferenceProperty(Team, collection_name="player_set")

    The key name for each Team entity is 'team_', and for each Player entity 'player_'. By some prior arrangement I have the Team entities' (key_name, name) mapping available to me, for example (team_01, United States Of America), (team_02, Russia), etc. I have to show all the players and their teams on a page. One way of doing this would be:

        players = Player.all().fetch(1000)              # This is 1 DB read
        for player in players:                          # This will iterate 1000 times
            self.response.out.write(player.name)        # This is obviously not a DB read
            self.response.out.write(player.team.name)   # This is a total of 1x1000 = 1000 DB reads

    That is 1001 DB reads for a silly thing. The interesting part is that when I do a db.to_dict() on players, it shows that for every player in that list there is the 'name' of the player and the 'key_name' of the team available too. So how can I do the below?

        players = Player.all().fetch(1000)          # This is 1 DB read
        for player in players:                      # This will iterate 1000 times
            self.response.out.write(player.name)    # This is obviously not a DB read
            self.response.out.write(team_list[player.<SOME WAY OF GETTING TEAM KEY NAME>])
            # Here 'team_list' already has (key_name, name) for all 5 teams

    I have been struggling with this for a long time and have read every available piece of documentation. I could just hug the person who can help me here :-)

    Disclaimer: the above problem description is not a real scenario, but a simplified arrangement that represents my problem exactly. I have run into it in a rather complex and big GAE application.
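
    (A sketch of the usual answer: read the Key stored in the reference, rather than the referenced entity, by calling get_value_for_datastore() on the property object. The key's name() is the 'team_xx' key_name, which indexes straight into team_list with no extra datastore reads.)

        players = Player.all().fetch(1000)                           # 1 DB read
        for player in players:
            team_key = Player.team.get_value_for_datastore(player)   # no datastore read
            self.response.out.write(player.name)
            self.response.out.write(team_list[team_key.name()])      # e.g. 'team_01'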


  • django dynamically deduce SITE_ID according to the domain

    - by dcrodjer
    I am trying to develop a site which will render multiple customized sites according to the domain name (subdomain, to be more precise). All my domain names are redirected to the ... So for each site there will be a corresponding model which defines how the site should look (SITE - SITE_SETTINGS).

    What will be the best way to utilize the Django sites framework to get the SITE_ID of the current site from the domain name, instead of hard-coding it in the settings files (as in the Django sites documentation), and then run database queries and render the views accordingly? If using multiple settings files is my only option, can this (having the WSGI script handle the domain name) be done?

    Update: So finally, following Luke's answer, what I will do is define a custom middleware which makes the views available with the important variables required according to the domain. As far as sitemaps and comments are concerned, I will have to customize the sitemaps app and build a custom sites model on which the other sites models will be based. And since the comments system is based on the hard-coded sitemap ID, I can use it just as is on the models (the models will already be filtered according to the site by my sites framework), though the permalink feature will have to be customized. So, a lot of customization. Please suggest if I am going wrong anywhere in this, because I have to ensure that the features of the project are optimized. Thanks!
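
    (A sketch of the custom-middleware idea mentioned in the update: resolve the Site record from the request's host name and stash it on the request, instead of relying on a fixed SITE_ID. The class name is made up, and the fallback behaviour is a per-project choice.)

        from django.contrib.sites.models import Site

        class DynamicSiteMiddleware(object):
            def process_request(self, request):
                host = request.get_host().split(':')[0]   # strip any port number
                try:
                    request.site = Site.objects.get(domain=host)
                except Site.DoesNotExist:
                    request.site = None                   # or fall back to a default Site
                return None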


  • [AS3] Calling Php Script with UTF-8 POST variables

    - by kornelijepetak
    The AS3 documentation says that Strings in AS3 are in UTF-16 format. There is a textbox on a Flash clip where the user can type some data. When a button is clicked, I want this data to be sent to a PHP script. I have everything set up, but it seems that the PHP script gets the data in UTF-16 format. The data in the database (which is UTF-8) shows some unrecognizable characters (where special characters are used), meaning that the data has not been sent in the correct encoding.

        var variables:URLVariables = new URLVariables;
        var varSend:URLRequest = new URLRequest("http://website.com/systematic/accept.php");
        varSend.method = URLRequestMethod.POST;
        varSend.data = variables;

        var varLoader:URLLoader = new URLLoader;
        varLoader.dataFormat = URLLoaderDataFormat.VARIABLES;
        varLoader.addEventListener(Event.COMPLETE, completeHandler);

    When the submit button is clicked, the following handler gets executed:

        function sendData(event:MouseEvent):void {
            // i guess here is the problem (tbName.text is UTF-16)
            variables.name = tbName.text;
            varLoader.load(varSend);
        }

    Is there any way to send the data so that the PHP script gets it in UTF-8 format? (The PHP script is retrieving the value using $_POST['name'].)


  • Where is Rebol's fill-pen documented (to get a glow effect on a round rectangle)?

    - by Rebol Tutorial
    There is some discussion about fill-pen here: http://www.mail-archive.com/[email protected]/msg02019.html

    But I can't see documentation for the cubic, diamond, etc. effects of fill-pen in Rebol's official docs. I'm trying to draw a round rectangle with a glow effect, but I don't really understand the parameters I'm playing with, so I can't get exactly what I'd like (I'd like the glow effect to start from the center, not from the dark top-left corner):

        view layout [
            box 278x185 effect [  ; default box face size is 100x100
                draw [
                    anti-alias on    ; information for the next draw element (not required)
                    line-width 2.5   ; number of pixels in width of the border
                    pen black        ; color of the edge of the next draw element
                    ; fill pen is a little complex:
                    ; fill-pen 10x10 0 90 0 1 1 0.0.0 255.0.0 255.0.255
                    fill-pen radial 20x20 5 55 5 5 10 0.0.0 55.0.5 55.0.5
                    ; the draw element
                    box        ; another box drawn as an effect
                    15         ; size of rounding in pixels
                    0x0        ; upper left corner
                    278x170    ; lower right corner
                ]
            ]
        ]


  • How do I append Word templates to a new document in VB.NET?

    - by Tom
    I'm poking around to see if this app can be done. Basically, the end user needs to create a bunch of export documents that are populated from a database. There will be numerous document templates (.dot), and the end result will be the user choosing templates x, y and z to include in the documentation, clicking a button, and having the app create a new Word document, append the templates, and then populate the templates with the appropriate data. The reason it needs to be done in Word, as opposed to something like Crystal Reports, is that the user may customize some fields before printing the documents, as they can vary from export to export. Is this possible to do through VB.NET (VS 2010)? I assume it is, but I'm having difficulty tracking down a solution. Or, alternatively, is there a better solution? Here's what I have so far (not much, I know):

        Imports Microsoft.Office.Interop

        Public Class Form1
            Private Sub Button1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button1.Click
                Dim oWord As Word.Application
                Dim oDoc As Word.Document
                oWord = CreateObject("Word.Application")
                oWord.Visible = False
                oDoc = oWord.Documents.Add

                'Open templates x.dot, y.dot, z.dot
                'Append above templates to new document created
                'Populate new document

                oWord.Visible = True
            End Sub
        End Class

    Thanks.
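
    (A rough sketch of the "append the templates, then populate" part, continuing from the oDoc variable above. Range.InsertFile pulls another document's content in at a given position; the template paths and the bookmark name are placeholders, and this assumes the templates expose their fillable fields as named bookmarks.)

        Dim templates() As String = {"C:\Templates\x.dot", "C:\Templates\y.dot", "C:\Templates\z.dot"}

        For Each template As String In templates
            Dim endRange As Word.Range = oDoc.Range()
            endRange.Collapse(Word.WdCollapseDirection.wdCollapseEnd) ' jump to the end of the document
            endRange.InsertFile(template)                             ' append the template's body
        Next

        ' Populate a field defined in one of the templates, e.g. a bookmark named "ExporterName"
        If oDoc.Bookmarks.Exists("ExporterName") Then
            oDoc.Bookmarks("ExporterName").Range.Text = "Acme Exports Ltd."
        End If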


  • Intel MKL memory management and exceptions

    - by Andrew
    Hello everyone, I am trying out Intel MKL and it appears that it has its own (C-style) memory management. The documentation suggests using the MKL_malloc/MKL_free pair for vectors and matrices, and I do not know a good way to handle that. One of the reasons is that memory alignment is recommended to be at least 16 bytes, and with these routines it is specified explicitly. I used to rely on auto_ptr and boost::smart_ptr a lot so I could forget about memory clean-up. How can I write an exception-safe program with MKL memory management, or should I just use regular auto_ptrs and not bother? Thanks in advance.

    EDIT: this link may explain why I brought up the question: http://software.intel.com/sites/products/documentation/hpc/mkl/win/index.htm

    UPDATE: I used an idea from the answer below for the allocator. This is what I have now:

        template <typename T, size_t TALIGN = 16, size_t TBLOCK = 4>
        class aligned_allocator : public std::allocator<T>
        {
        public:
            pointer allocate(size_type n, const void *hint)
            {
                pointer p = NULL;
                size_t count = sizeof(T) * n;
                size_t count_left = count % TBLOCK;
                if (count_left != 0)
                    count += TBLOCK - count_left;
                if (!hint)
                    p = reinterpret_cast<pointer>(MKL_malloc(count, TALIGN));
                else
                    p = reinterpret_cast<pointer>(MKL_realloc((void*)hint, count, TALIGN));
                return p;
            }

            void deallocate(pointer p, size_type n) { MKL_free(p); }
        };

    If anybody has any suggestions, feel free to make it better.
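
    (One sketch of the exception-safety question itself: pair MKL_malloc/MKL_free with a smart pointer carrying a custom deleter, so the buffer is released even if an exception unwinds the stack. boost::shared_ptr is used here to match the question's toolset; make_mkl_buffer is an invented helper name.)

        #include <boost/shared_ptr.hpp>
        #include <mkl.h>
        #include <new>      // std::bad_alloc

        boost::shared_ptr<double> make_mkl_buffer(std::size_t n)
        {
            double* raw = static_cast<double*>(MKL_malloc(n * sizeof(double), 16));
            if (!raw)
                throw std::bad_alloc();
            // MKL_free runs automatically when the last shared_ptr copy goes away.
            return boost::shared_ptr<double>(raw, MKL_free);
        }

        // usage:
        //   boost::shared_ptr<double> v = make_mkl_buffer(1024);
        //   ... pass v.get() to MKL routines ...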


  • Two Applications using the same index file with Hibernate Search

    - by Dominik Obermaier
    Hi, I want to know if it is possible to use the same index file for an entity in two applications. Let me be more specific: we have an online application with a frontend for the users and an application for the backend tasks (the administrator interface). Both are running on the same JBoss AS. Both applications use the same database, so they use the same entities. Of course, the package names for the entities are not the same in both applications.

    So this is our use case: a user should be able to search via the frontend, but is only allowed to see results which are tagged with "visible". This tagging happens in our admin interface, so the index for the frontend should be updated every time an entity is tagged as "visible" in the backend. Of course, both applications have the same index root folder. In my index folder there are 2 index files:

        de.x.x.admin.model.Product
        de.x.x.frondend.model.Product

    How can I "merge" these via the Hibernate Search configuration? I just couldn't work it out from the documentation... Thanks for any help!
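
    (A sketch of the usual way to make two differently-packaged entity classes share one index: give both an explicit, identical index name via @Indexed(index = ...), so Hibernate Search stops deriving the index name from the fully qualified class name. Field names here follow the question's Product/"visible" example; sharing one physical index between two deployments also needs the locking/DirectoryProvider settings to allow it, which this sketch does not cover.)

        // In de.x.x.admin.model and (with the same annotation values) in the frontend's package:
        import javax.persistence.Entity;
        import javax.persistence.Id;
        import org.hibernate.search.annotations.Field;
        import org.hibernate.search.annotations.Indexed;

        @Entity
        @Indexed(index = "Product")   // same name in both applications -> same index directory
        public class Product {

            @Id
            private Long id;

            @Field
            private String name;

            @Field
            private boolean visible;  // the flag toggled by the admin interface

            // getters and setters omitted
        }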

