Search Results

Search found 18761 results on 751 pages for 'lot'.

  • Generating PDF document using XSLT

    - by Nair
    I have one huge XML document. I have a set of XSLs representing each node in the XML. These XSLs also contain JavaScript to generate the dynamic content. They use images which are in a separate images folder, and they use fonts as well. At present, I have a program which displays all the nodes that can be transformed; the user clicks on one of the nodes and the program performs the XSLT and displays the content in HTML format in IE. I want to write a program (.NET, C# or any .NET language) which will allow the user to run the XSLT transform on all the available nodes and create one PDF document. My initial requirement was to display the whole document in IE itself, so I reused the existing code and, for each node, performed the XSLT and appended it to the current HTML with a page break, and it worked OK till we hit huge files. So the requirement changed to creating one PDF file with all the nodes. I have a couple of questions: 1. What is the best way to create a PDF file using XSLT transformation? 2. Since the images use relative paths, if we generate the HTML via XSLT and then write it to an output stream, will it lose the images? 3. Will the fonts be preserved in the PDF document? I'd really appreciate it if someone could point me to a good example that I can take and run with. Thanks a lot.

  • What are the most time-consuming checks performed by .NET when executing a managed application?

    - by ltorje
    I've developed a .NET-based Windows service that uses both managed (C#) and unmanaged code (C/C++ libraries). In some domain environments (e.g. a Win 2k3 32-bit server inside the domain abc.com) the service sometimes takes more than 30 seconds to start (especially after an OS restart), thus failing to start. I suspect that it has something to do with enterprise-level security but I do not know for sure. http://msdn.microsoft.com/en-us/library/aa720255%28VS.71%29.aspx I've tried the following without success: - delay loading references by moving the using directives as far as possible from the ServiceBase implementation (especially the XML namespace - known to cause delays in loading) - delay loading and configuring log4net - precompiling the code by using ngen - delaying the start of the worker thread - adding/removing the manifest + the dependencies set inside - signing/unsigning the binaries - reading the configuration settings (there are a lot of settings and the scope level for all of them is set to application) as late as possible - adding all dependencies to the GAC I haven't yet tried adding security demands for the class that implements the Main method. I didn't try to implement my own configuration loader because, after inspecting the autogenerated code, I noticed that the settings class is a singleton and it gets its instance on call. By completely removing the log4net dependency it worked, but this is not an option. When the network card is disabled, the service starts immediately. Any suggestions/comments/solutions you have would be most welcome.

  • How do the sizes of standard libraries compare across languages?

    - by Roman A. Taycher
    Someone was recently raving about how great jQuery was, how it made JavaScript a pleasure, and how the whole source code was so small (and one file). I looked it up on www.ohloh.net/ and it said it was about 30,000 lines of JavaScript; when I tried curl piped to wc, it said about 5000 lines (a strange discrepancy - maybe test suites, etc.?). I thought, well, it isn't that strange, since JavaScript, from what I've heard, has a lot of fun dynamic tricks, so you can probably get away with a small library. But then I thought about other high-level languages, the ones with large standard libraries, and wondered how big the standard libraries are for Python/Ruby/Haskell/Pharo (Smalltalk)/*ML/etc. (the libraries, not the VM stuff, to the degree it's possible to separate them). Anybody know? Any details (comment/blank/code lines, test code lines, lines in the language vs. lines in FFI/bytecode) are appreciated! Edit: P.S. Since this started with me asking about jQuery, as a bonus, could you please also list the size of mega-frameworks? A mega-framework provides so much that people using an X mega-framework in language Y might sometimes refer to programming in XY, or even X, rather than in Y (e.g. Qt, jQuery, etc.).
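
    As a starting point for the comparison itself, a crude measurement is easy to script; a minimal Python 3 sketch that counts non-blank lines of .py files in the installed standard library (note it counts test modules too and misses the C-implemented parts, which is exactly the kind of ambiguity raised above):

        import os
        import sysconfig

        def count_lines(root, extensions=(".py",)):
            # Count non-blank lines in every matching file under root.
            total = 0
            for dirpath, _dirnames, filenames in os.walk(root):
                for name in filenames:
                    if name.endswith(extensions):
                        path = os.path.join(dirpath, name)
                        with open(path, errors="ignore") as handle:
                            total += sum(1 for line in handle if line.strip())
            return total

        stdlib_dir = sysconfig.get_paths()["stdlib"]   # e.g. /usr/lib/python3.x
        print(stdlib_dir, count_lines(stdlib_dir))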

  • tipfy for Google App Engine: Is it stable? Can auth/session components of tipfy be used with webapp?

    - by cv12
    I am building a web application on Google App Engine that requires users to register with the application and subsequently authenticate with it and maintain sessions. I don't want to force users to have Google accounts. Also, the target audience for the application is the average non-geek, so I'm not very keen on using OpenID or OAuth. I need something simple like: User registers with an e-mail and password, and then can log back in with those credentials. I understand that this approach does not provide the security benefits of Google or OpenID authentication, but I am prepared to trade foolproof security for end-user convenience and hassle-free experience. I explored Django, but decided that consecutive deprecations from appengine-helper to app-engine-patch to django-nonrel may signal that path may be a bit risky in the long-term. I'd like to use a code base that is likely to be maintained consistently. I also explored standalone session/auth packages like gaeutilities and suas. GAEUtilities looked a bit immature (e.g., the code wasn't pythonic in places, in my opinion) and SUAS did not give me a lot of comfort with the cookie-only sessions. I could be wrong with my assessment of these two, so I would appreciate input on those (or others that may serve my objective). Finally, I recently came across tipfy. It appears to be based on Werkzeug and Alex Martelli spoke highly of it here on stackoverflow. I have two primary questions related to tipfy: As a framework, is it as mature as webapp? Is it stable and likely to be maintained for some time? Since my primary interest is the auth/session components, can those components of the tipfy framework be used with webapp, independent of the broader tipfy framework? If yes, I would appreciate a few pointers to how I could go about doing that.
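
    On the narrow e-mail/password requirement, the storage side needs nothing framework-specific; a minimal sketch for the Python 2 App Engine runtime using only the datastore and hashlib (the Account model and helper names are assumptions, and session handling is not shown):

        import hashlib
        import os

        from google.appengine.ext import db

        class Account(db.Model):   # hypothetical model name
            email = db.StringProperty(required=True)
            salt = db.StringProperty(required=True)
            pw_hash = db.StringProperty(required=True)

        def _hash(password, salt):
            return hashlib.sha256(salt + password).hexdigest()

        def register(email, password):
            salt = os.urandom(16).encode("hex")
            account = Account(key_name=email, email=email, salt=salt,
                              pw_hash=_hash(password, salt))
            account.put()
            return account

        def authenticate(email, password):
            account = Account.get_by_key_name(email)
            if account and account.pw_hash == _hash(password, account.salt):
                return account
            return None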

  • File Storage for Web Applications: Filesystem vs DB vs NoSQL engines

    - by El Yobo
    I have a web application that stores a lot of user-generated files. Currently these are all stored on the server filesystem, which has several downsides for me. When we move "folders" (as defined by our application) we also have to move the files on disk (although this is more due to strange design decisions on the part of the original developers than a requirement of storing things on the filesystem). It's hard to write tests for filesystem actions; I have a mock filesystem class that logs actions like move, delete etc., without performing them, which more or less does the job, but I don't have 100% confidence in the tests. I will be adding some other jobs which need to access the files from other services to perform additional tasks (e.g. indexing in Solr, generating thumbnails, movie format conversion), so I need to get at the files remotely. Doing this over network shares seems dodgy... Dealing with permissions on the filesystem has sometimes given us problems in the past, although now that we've moved to a pure Linux environment this should be less of an issue. What are the downsides of storing files as BLOBs in MySQL? I guess that it would massively increase the database size and reduce the effectiveness of caches, but are there other problems? Do the same problems exist with NoSQL systems like Cassandra? Does anyone have any other suggestions that might be appropriate?

  • Update C# Chart using BackgroundWorker

    - by Mark
    I am currently trying to move updating of a chart on my form into a BackgroundWorker using: bwCharter.RunWorkerAsync(chart1); Which runs: private void bcCharter_DoWork(object sender, DoWorkEventArgs e) { System.Windows.Forms.DataVisualization.Charting.Chart chart = null; // Convert e.Argument to chart //.. // Converted.. chart.Series.Clear(); e.Result=chart; setChart(c.chart); } private void setChart(System.Windows.Forms.DataVisualization.Charting.Chart arg) { if (chart1.InvokeRequired) { chart1.Invoke(new MethodInvoker(delegate { setChart(arg); })); return; } chart1 = arg; } However, at the point of clearing the series, an exception is thrown. Basically, I want to do a whole lot more processing after clearing the series, which slows the GUI down completely - so I wanted this in another thread. I thought that by passing it as an argument I would be safe, but apparently not! Interestingly, the chart is on a tab page. I can run this over and over if the tab page is in the background, but if I run this, look at the chart, hide it again, and re-run, it throws the exception. Obviously, it throws if the chart is in the foreground as well. Can anyone suggest what I can do differently? Thanks!

  • Need guidelines for optimizing WebGL performance by minimizing shader changes

    - by brainjam
    I'm trying to get an idea of the practicality of WebGL for rendering large architectural interior scenes, consisting of 100Ks of triangles. These triangles are distributed over many objects, and there are many materials in the scene. On the other hand, there are no moving parts. And the materials tend to be fairly simple, mostly based on texture maps. There is a lot of texture map sharing - for example, all the chairs in the scene will share a common map. There is also some multitexturing - up to three textures overlaid in a material. I've been doing a little experimentation and reading, and gather that frequently switching materials during a rendering pass will slow things down. For example, a scene with 200K triangles will have significant performance differences, depending on whether there are 10 or 1000 objects, assuming that each time an object is displayed a new material is set up. So it seems that if performance is important the scene should be sorted by materials so as to minimize material switching. What I'm looking for is guidelines on how to think about the overhead of various state changes, and where I get the biggest bang for the buck. For example: what are the relative performance costs of, say, gl.useProgram(), gl.uniformMatrix4fv(), and gl.drawElements()? Should I try to write ubershaders to minimize shader switching? Should I try to aggregate geometry to minimize the number of gl.drawElements() calls? I realize that mileage may vary depending on browser, OS, and graphics hardware. And I'm also not looking for heroic measures. Just some guidelines from people who have already had some experience in making scenes fast. I'll add that while I've had some experience with fixed-pipeline OpenGL programming in the past, I'm rather new to the WebGL/OpenGL ES 2.0 way of doing things.

  • realtime visitors with nodejs & redis & socket.io & php

    - by orhan bengisu
    I am new to these technologies. I want to get real-time visitor counts for each product on my site. I mean a notification like "X users are seeing this product". Whenever a user connects to a product, the counter will be increased for that product, and when they disconnect, the counter will be decreased just for that product. I tried to search through a lot of documents but I got confused. I am using the Predis library for PHP. What I have done may be totally wrong. I am not sure where to put createClient, when to subscribe & when to unsubscribe. What I have done so far: On the product detail page: $key = "product_views_".$product_id; $counter = $redis->incr($key); $redis->publish("productCounter", json_encode(array("product_id"=> "1000", "counter"=> $counter ))); In app.js var app = require('express')() , server = require('http').createServer(app) , socket = require('socket.io').listen(server,{ log: false }) , url = require('url') , http= require('http') , qs = require('querystring') ,redis = require("redis"); var connected_socket = 0; server.listen(8080); var productCounter = redis.createClient(); productCounter.subscribe("productCounter"); productCounter.on("message", function(channel, message){ console.log("%s, the message : %s", channel, message); io.sockets.emit(channel,message); } productCounter.on("unsubscribe", function(){ // I think I should decrease the counter here. Should I? And how? } io.sockets.on('connection', function (socket) { connected_socket++; socket_id = socket.id; console.log("["+socket_id+"] connected"); socket.on('disconnect', function (socket) { connected_socket--; console.log("Client disconnected"); productCounter.unsubscribe("productCounter"); }); }) Thanks a lot for your answers!

  • Hook for adding new menu items, showing in WordPress header navbar, not in admin menu?

    - by user1452376
    I want to add a new menu item from my plugin. I tried a lot but failed. What is the hook for creating new items in the navbar menu? Please help. function add_new_item_in_nav_menu(){ ..... } add_action('init','add_new_item_in_nav_menu'); I know how to add a page via a hook: function add_page2(){ global $user_ID; $new_page_title = 'abc'; $new_page_content = 'abc'; $new_page_template = ''; $page_check = get_page_by_title($new_page_title); $new_page = array( 'post_type' => 'page', 'post_title' => $new_page_title, 'post_content' => $new_page_content, 'post_status' => 'publish', 'post_author' => 1, ); if(!isset($page_check->ID)){ $new_page_id = wp_insert_post($new_page); if(!empty($new_page_template)){ update_post_meta($new_page_id, '_wp_page_template', $new_page_template); } } $homeSet = get_page_by_title( 'Home' ); update_option( 'page_on_front', $homeSet->ID ); update_option( 'show_on_front', 'page' ); } add_action( 'init', 'add_page2' );

  • Elegant way of parsing Data files for Simulation

    - by sc_ray
    I am working on this project where I need to read in a lot of data from .dat files and use the data to perform simulations. The data in my .dat file looks as follows: DeviceID InteractingDeviceID InteractionStartTime InteractionEndTime 1 2 1101 1105 The values 1, 2, 1101 and 1105 are tab-delimited, and the row means Device 1 started interacting with Device 2 at 1101 ms and ended the interaction at 1105 ms. I have trace data sets that compile thousands of such interactions, and my job is to analyze these interactions. The first step is to parse the file. The language of choice is C++. The approach I was thinking of taking was to read the file and, for every line that's read, create a Device object. This Device object will contain the property DeviceId and an array/vector of structs that will contain a list of all the devices the given DeviceId interacted with over the course of the simulation. The struct will contain the interacting device ID, interaction start time and interaction end time. I have a two-fold question here: Is my approach correct? If I am on the right track, how do I rapidly parse these tab-delimited data files and create Device objects without excessive memory overhead in C++? A push in the right direction would be much appreciated. Thanks

  • How to migrate primary key generation from "increment" to "hi-lo"?

    - by Bevan
    I'm working with a moderate-sized SQL Server 2008 database (around 120 tables, backups are around 4GB compressed) where all the table primary keys are declared as simple int columns. At present, primary key values are generated by NHibernate with the increment identity generator, which has worked well thus far, but precludes moving to a multiprocessing environment. Load on the system is growing, so I'm evaluating the work required to allow the use of multiple servers accessing a common database backend. Transitioning to the hi-lo generator seems to be the best way forward, but I can't find a lot of detail about how such a migration would work. Will NHibernate automatically create rows in the hi-lo table for me, or do I need to script these manually? If NHibernate does insert rows automatically, does it properly take account of existing key values? If NHibernate does take care of things automatically, that's great. If not, are there any tools to help? Update: NHibernate's increment identifier generator works entirely in-memory. It's seeded by selecting the maximum value of used identifiers from the table, but from that point on allocates new values by a simple increment, without reference back to the underlying database table. If any other process adds rows to the table, you end up with primary key collisions. You can run multiple threads within the one process just fine, but you can't run multiple processes. For comparison, the NHibernate identity generator works by configuring the database tables with identity columns, putting control over primary key generation in the hands of the database. This works well, but compromises the unit of work pattern. The hi-lo algorithm sits in between these - generation of primary keys is coordinated through the database, allowing for multiprocessing, but actual allocation can occur entirely in memory, avoiding problems with the unit of work pattern.
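
    For reference, the hi-lo scheme itself is small; a rough Python sketch against a hypothetical hilo table (one row per entity, a single next_hi column), shown only to illustrate the algorithm rather than NHibernate's actual implementation:

        import sqlite3

        class HiLoGenerator(object):
            # Hands out ids in blocks: one database round trip per max_lo identifiers.
            def __init__(self, connection, entity, max_lo=100):
                self.connection = connection
                self.entity = entity
                self.max_lo = max_lo
                self.hi = 0
                self.lo = max_lo + 1          # forces a block fetch on first use

            def _next_hi(self):
                # Claim the next block inside a transaction; table/column names are assumptions.
                with self.connection:
                    row = self.connection.execute(
                        "SELECT next_hi FROM hilo WHERE entity = ?", (self.entity,)).fetchone()
                    self.connection.execute(
                        "UPDATE hilo SET next_hi = next_hi + 1 WHERE entity = ?", (self.entity,))
                return row[0]

            def next_id(self):
                if self.lo > self.max_lo:
                    self.hi = self._next_hi()
                    self.lo = 1
                value = self.hi * self.max_lo + self.lo
                self.lo += 1
                return value

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE hilo (entity TEXT PRIMARY KEY, next_hi INTEGER)")
        conn.execute("INSERT INTO hilo VALUES ('customer', 0)")
        gen = HiLoGenerator(conn, "customer")
        print([gen.next_id() for _ in range(5)])   # 1..5, only one UPDATE issued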

  • Which version of Grady Booch's OOA/D book should I buy?

    - by jackj
    Grady Booch's "Object-Oriented Analysis and Design with Applications" is available brand new in both the 2nd edition (1993) and the 3rd edition (2007), while many used copies of both editions are available. Here are my concerns: 1) The 2nd edition uses C++: given that I just finished reading my first two C++ books (Accelerated C++ and C++ Primer) I guess practical tips can only help, so the 2nd edition is probably best (I think the 3rd edition has absolutely no code). On the other hand, the C++ books I read insist on the importance of using standard C++, whereas Booch's 2nd edition was published before the 1998 standard. 2) The 2nd edition is shorter (608 pages vs. 720) so, I guess, it will be slightly easier to get through. 3) The 3rd edition uses UML 2.0, whereas the 2nd edition is pre-UML. Some reviews say that the notation in the 2nd edition is close enough to UML, so it doesn't matter, but I don't know if I should be worrying about this or not. 4) The 2nd edition is available in good-shape used copies for considerably less than what the 3rd one goes for. Given all the above factors, do you think I should buy the 2nd or the 3rd edition? Recommendations on other books are also welcome but I would prefer it if whoever answers has read at least one of the versions of Booch's book (preferably both!). I have already bought but not read GoF and Riel's books. I also know that I should practice a lot with real-life code. Thanks.

  • Which software for intranet CMS - Django or Joomla?

    - by zalun
    In my company we are thinking of moving from a wiki-style intranet to a more bespoke CMS solution. The natural choice would be Joomla, but we have a specific architecture. There are a few hundred people who will use the system. The system should be self-explanatory (easier than a wiki). We use a lot of tools: web tools, applications and ones integrated within 3rd-party software. The superior element which acts as the glue for all of them is our API. For example, for the intranet tools we use Django, but it's used without the ORM, limited more or less to templates and URLs - every application has adequate methods within our API. We do not use the Django admin interface, because it is heavily dependent on the ORM. Because of that, Joomla may be hard to integrate. Every employee should be able to edit most of the pages; authentication and privileges have to be managed by our API. How hard is it to get Joomla to use a different authentication process? (Extensions only - no hacks.) If one knows Django better than Joomla, should Django be used?
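
    To give a feel for the Django side, authentication backends are the documented extension point for this; a rough sketch that delegates credential checks to the company API (our_api and check_credentials are assumed placeholders, and it presumes you are willing to let django.contrib.auth alone use the ORM):

        from django.contrib.auth.models import User

        import our_api  # assumed wrapper around the company's internal API

        class ApiBackend(object):
            # Authenticate against the internal API instead of Django's password table.
            def authenticate(self, username=None, password=None):
                if not our_api.check_credentials(username, password):  # assumed call
                    return None
                # Mirror the remote account locally so the rest of Django can use it.
                user, _created = User.objects.get_or_create(username=username)
                return user

            def get_user(self, user_id):
                try:
                    return User.objects.get(pk=user_id)
                except User.DoesNotExist:
                    return None

        # settings.py would then list it:
        # AUTHENTICATION_BACKENDS = ('myproject.backends.ApiBackend',)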

  • Using Doctrine to abstract CRUD operations

    - by TomWilsonFL
    This has bothered me for quite a while, but now it is a necessity that I find the answer. We are working on quite a large project using CodeIgniter plus Doctrine. Our application has a front end and also an admin area for the company to check/change/delete data. When we designed the front end, we simply consumed most of the Doctrine code right in the controller: //In semi-pseudocode function register() { $data = get_post_data(); if (count($data) && isValid($data)) { $U = new User(); $U->fromArray($data); $U->save(); $C = new Customer(); $C->fromArray($data); $C->user_id = $U->id; $C->save(); redirect_to_next_step(); } } Obviously, when we went to do the admin views, code duplication began, and since we were in a "get it DONE" mode, the code now stinks of bloat. I have moved a lot of functionality (business logic) into the model using model methods, but the basic CRUD does not fit there. I was going to attempt to place the CRUD into static methods, i.e. Customer::save($array) [would perform both insert and update depending on whether the primary key is present in the array], Customer::delete($id), Customer::getObj($id = false) [if false, get all data]. This is going to become painful though for 32 model objects (and growing). Also, at times models need to interact (as in the interaction above between user data and customer data), which can't be done in a static method without breaking encapsulation. I envision adding another layer to this (exposing web services), so knowing there are going to be 3 "controllers" at some point I need to encapsulate this CRUD somewhere (obviously), but are static methods the way to go, or is there another road? Your input is much appreciated.

  • Hadoop reduce task gets hung

    - by user806098
    I set up a Hadoop cluster with 4 nodes. When running a map-reduce job, the map tasks finish quickly, while the reduce task hangs at 27%. I checked the log; it shows that the reduce task fails to fetch map output from the map nodes. The job tracker log on the master shows messages like this: 2011-06-27 19:55:14,748 INFO org.apache.hadoop.mapred.JobTracker: Adding task (REDUCE) 'attempt_201106271953_0001_r_000000_0' to tip task_201106271953_0001_r_000000, for tracker 'tracker_web30.bbn.com.cn:localhost/127.0.0.1:56476' And the name node log on the master shows messages like this: 2011-06-27 14:00:52,898 INFO org.apache.hadoop.ipc.Server: IPC Server handler 4 on 54310, call register(DatanodeRegistration(202.106.199.39:50010, storageID=DS-1989397900-202.106.199.39-50010-1308723051262, infoPort=50075, ipcPort=50020)) from 192.168.225.19:16129: error: java.io.IOException: verifyNodeRegistration: unknown datanode 202.106.199.3 9:50010 However, neither "web30.bbn.com.cn" nor 202.106.199.39 / 202.106.199.3 is a slave node. I think such IPs/domains appear because Hadoop fails to resolve a node (first against the intranet DNS server), then goes to a higher-level DNS server and later to the top, still fails, and then the "junk" IPs/domains are returned. But I checked my config, and it goes like this: /etc/hosts: 127.0.0.1 localhost.localdomain localhost ::1 localhost6.localdomain6 localhost6 192.168.225.16 master 192.168.225.66 slave1 192.168.225.20 slave5 192.168.225.17 slave17 conf/core-site.xml: hadoop.tmp.dir /root/hadoop_tmp/hadoop_${user.name} fs.default.name hdfs://master:54310 io.sort.mb 1024 hdfs-site.xml: dfs.replication 3 masters: master slaves: master slave1 slave5 slave17 Also, all firewalls (iptables) are turned off, and SSH between every 2 nodes is OK. So I don't know where exactly the error comes from. Please help. Thanks a lot.

  • Javascript onclick stops working, multiple dynamically created divs.

    - by Patrick
    I have run into a strange problem: I am creating a lot of divs dynamically, and I recently found out that some of my divs don't fire the onclick event. All the divs are created using the same template, so why are some not working? Most of the time it's the 4th-5th from the bottom. If you click on one of the others and then try again, you might get one of those to trigger, but only sporadically. Code to create the divs: GameField.prototype.InitField = function(fieldNumber) { var newField = document.createElement("div"); if (fieldNumber == 0 || fieldNumber == 6 || fieldNumber == 8 || fieldNumber == 17) newField.className = 'gameCellSmall borderFull gameText gameTextAlign'; else newField.className = 'gameCellSmall borderWithoutTop gameText gameTextAlign'; var instance = this; if (fieldNumber == 6 || fieldNumber == 7 || fieldNumber == 17) { } else newField.onclick = function() { instance.DivClick(fieldNumber); return false; } this.fields[fieldNumber] = newField; this.score[fieldNumber] = 0; return newField; } I added the return false to the click function, but it still behaves strangely. Why are some not triggering? I create around 18 divs per player, but it happens even if I just create one player. Do I perhaps need to cancel the event once I am done with it? (Like the return false; is trying to do.)

  • Testing subpackage modules in Python 3

    - by Mitchell Model
    I have been experimenting with various uses of hierarchies like this and the differences between absolute and relative imports, and can't figure out how to do routine things with the package, subpackages, and modules without simply putting everything on sys.path. I have a two-level package hierarchy: MyApp __init__.py Application __init__.py Module1 Module2 ... Domain __init__.py Module1 Module2 ... UI __init__.py Module1 Module2 ... I want to be able to do the following: Run test code in a Module's "if main" when the module imports from other modules in the same directory. Have one or more test code modules in each subpackage that run unit tests on the modules in the subpackage. Have a set of unit tests that reside someplace reasonable, but outside the subpackages, either in a sibling package, at the top-level package, or outside the top-level package (though all these might do is run the tests in each subpackage). "Enter" the structure from any of the three subpackage levels, e.g. run code that just uses Domain modules, run code that just uses Application modules (but Application uses code from both Application and Domain modules), and run code from the UI that uses code from both UI and Application; for instance, Application test code would import Application modules but not Domain modules. After developing the bulk of the code without subpackages, continue developing and testing after organizing the modules into this hierarchy. I know how to use relative imports so that external code that puts MyApp on its sys.path can import MyApp, import any subpackages it wants, and import things from their modules, while the modules in each subpackage can import other modules from the same subpackage or from sibling packages. However, the development needs listed above seem incompatible with subpackage structuring -- in other words, I can't have it both ways: a well-structured multi-level package hierarchy used from the outside and also used from within, in particular for testing but also because modules from one design level (in particular the UI) should not import modules from a design level below the next one down. Sorry for the long essay, but I think it fairly represents the struggles a lot of people have been having adapting to the new relative import mechanisms.
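
    One workable compromise, sketched below, is to keep a test module inside each subpackage that uses explicit relative imports, run it with python -m so the package context exists, and let unittest discovery drive everything from above MyApp (module and function names here are invented for illustration):

        # MyApp/Domain/test_module1.py -- hypothetical per-subpackage test module
        import unittest

        from . import Module1            # relative import works because we run via -m

        class TestModule1(unittest.TestCase):
            def test_double(self):
                self.assertEqual(Module1.double(2), 4)    # assumed function in Module1

        if __name__ == "__main__":
            unittest.main()

        # Run a single subpackage's tests (from the directory containing MyApp/):
        #     python -m MyApp.Domain.test_module1
        # Run everything from outside the package:
        #     python -m unittest discover -t . -s MyApp -p "test_*.py"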

  • One config file for multiple environments

    - by ho
    I'm currently working with systems that have quite a lot of configuration settings that are environment-specific (Dev, UAT, Production). Does anyone have any suggestions for minimizing the changes needed to the config file when moving between environments, as well as minimizing the duplication of data in the config file? It's mostly Application settings rather than User settings. The way I'm doing it at the moment is something similar to this: <DevConnectionString>xyz</DevConnectionString> <DevInboundPath>xyz</DevInboundPath> <DevProcessedPath>xyz</DevProcessedPath> <UatConnectionString>xyz</UatConnectionString> <UatInboundPath>xyz</UatInboundPath> <UatProcessedPath>xyz</UatProcessedPath> ... <Environment>Dev</Environment> And then I have a class that reads in the Environment setting via the My.Settings class (it's a VB project) and then uses that to decide what other settings to retrieve. This leads to too much duplication, though, so I'm not sure if it's worth it.

  • Why are my Flex resource bundles not being loaded?

    - by Chris R
    I have an Actionscript module in the flex source folder filterModules, which is one of two additional source folders in my project (the main source folder is reports, but I'm not dealing with anything in there right now). Here's the MXML content that references the resources. ... This array is assigned to the dataProvider field of a ComboBox. It's not bound using the bindings, presumably for reasons that made sense to the original developer, and it'd be nontrivial to change the class to make that happen. I additionally have a resource property file in a folder resources/en_US and I have the source folder resources/{locale} in the project source settings. My additional compiler options are -locale en_US. The resource property file is resources/en_US/labels.properties (All paths are relative to the flash builder project root) and contains (amongst other things) these keys: metric.q3 = Overall Satisfaction metric.q5 = Personnel metric.q9a = Issue Resolution metric.q42 = Visit Duration Sat metric.q34 = Visit Duration I have written some FlexUnit tests that run in my local Flash Player that exercise these resources -- they check that every label is represented in the metrics array, for example, so I know that the resource file is loaded when run locally. However, when I copy the module .swf file over to my server, the combo box to which the array is assigned is empty. I copy the .swf like so, if it matters: rsync -rlDv --inplace -T /tmp ~/projects/flex_reports/bin-debug/rankingFilter.swf HOSTNAME:WEBROOT/flashPath/ Why is this? I am not able to debug the remote module because our surrounding site sets up a lot of context and makes some database calls to determine which module to load. I'm hoping to get some pointers on why resource bundles might not show up. I'd understand it if the array was present with wrong labels, but the array is instead completely empty, which is pretty odd.

  • Core principles, rules, and habits for CS students

    - by Asad Butt
    No doubt there is a lot to read on blogs, in books, and on Stack Overflow, but can we identify some guidelines for CS students to use while studying? For me these are: Finish your course books early and read 4-5 times more material relative to your course work. Programming is one of the fastest-evolving professions; follow the blogs on a daily basis for the latest updates, news, and technologies. Instead of relying on assignments and exams, do at least one extra, non-graded, small to medium-sized project for every programming course. Fight hard for internships or work placements even if they are unpaid, since 3 months of work is worth a year at college. Practice everything, every possible and impossible way. Try doing every bit of your assignment projects yourself; i.e. fight for every inch. Rely on documentation as the first source for help and samples, and on Google and online forums as the last. Participate often in online communities and forums to learn the best possible approach for every solution to your problem (after doing your bit). Make testing one of your habits, as it is getting more important every day in programming. Make writing one of your habits: write something productive once or twice a week and publish it.

  • how do simple SQLAlchemy relationships work?

    - by Carson Myers
    I'm no database expert -- I just know the basics, really. I've picked up SQLAlchemy for a small project, and I'm using the declarative base configuration rather than the "normal" way. This way seems a lot simpler. However, while setting up my database schema, I realized I don't understand some database relationship concepts. If I had a many-to-one relationship before, for example, articles by authors (where each article could be written by only a single author), I would put an author_id field in my articles table. But SQLAlchemy has this ForeignKey object, and a relationship function with a backref kwarg, and I have no idea what any of it MEANS. I'm scared to find out what a many-to-many relationship with an intermediate table looks like (when I need additional data about each relationship). Can someone demystify this for me? Right now I'm setting up to allow OpenID auth for my application. So I've got this: from __init__ import Base from sqlalchemy.schema import Column from sqlalchemy.types import Integer, String class Users(Base): __tablename__ = 'users' id = Column(Integer, primary_key=True) username = Column(String, unique=True) email = Column(String) password = Column(String) salt = Column(String) class OpenID(Base): __tablename__ = 'openid' url = Column(String, primary_key=True) user_id = #? I think the ? should be replaced by Column(Integer, ForeignKey('users.id')), but I'm not sure -- and do I need to put openids = relationship("OpenID", backref="users") in the Users class? Why? What does it do? What is a backref?
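
    For the concrete question at the end, a sketch of what the missing pieces usually look like (the relationship name user and the backref name openids are choices, not requirements): ForeignKey constrains the plain user_id column to values in users.id, while relationship with backref adds object-level navigation in both directions, so an OpenID instance gets a .user attribute and a Users instance gets an .openids list.

        from __init__ import Base
        from sqlalchemy.schema import Column, ForeignKey
        from sqlalchemy.types import Integer, String
        from sqlalchemy.orm import relationship

        class OpenID(Base):
            __tablename__ = 'openid'
            url = Column(String, primary_key=True)
            # Column-level link: must hold an id that exists in the users table
            user_id = Column(Integer, ForeignKey('users.id'))
            # Object-level link: some_openid.user is a Users instance,
            # and the backref gives some_user.openids as a list of OpenID objects.
            user = relationship("Users", backref="openids")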

  • Combining cache methods - memcache/disk based

    - by Industrial
    Hi! Here's the deal. We would have taken the complete static HTML road to solve performance issues, but since the site will be partially dynamic, this won't work out for us. What we have thought of instead is using memcache + eAccelerator to speed up PHP and take care of caching for the most used data. Here are the two approaches that we have thought of right now: Using memcache on all major queries and leaving it alone to do what it does best. Using memcache for the most commonly retrieved data, and combining it with a standard hard-drive-stored cache for further usage. The major advantage of only using memcache is of course the performance, but as the number of users increases, the memory usage gets heavy. Combining the two sounds like a more natural approach to us, even with the theoretical compromise in performance. Memcached appears to have some replication features available as well, which may come in handy when it's time to increase the nodes. What approach should we use? Is it stupid to compromise and combine the two methods? Should we instead be focusing on utilizing memcache, and on upgrading the memory as the load increases with the number of users? Thanks a lot!
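
    For what it's worth, the combined approach usually ends up as a two-tier look-aside cache; a rough sketch in Python (the python-memcached client, the key scheme and the disk path are assumptions, and keys are presumed filesystem-safe; the same shape translates directly to PHP):

        import os
        import pickle

        import memcache  # python-memcached; assumed to be the client in use

        mc = memcache.Client(["127.0.0.1:11211"])
        DISK_DIR = "/var/cache/app"   # assumed location for the slower, larger tier

        def cache_get(key):
            value = mc.get(key)                      # tier 1: memory
            if value is not None:
                return value
            path = os.path.join(DISK_DIR, key)
            if os.path.exists(path):                 # tier 2: disk
                with open(path, "rb") as handle:
                    value = pickle.load(handle)
                mc.set(key, value, time=300)         # promote back into memory
                return value
            return None                              # miss: caller recomputes and calls cache_put

        def cache_put(key, value):
            mc.set(key, value, time=300)
            with open(os.path.join(DISK_DIR, key), "wb") as handle:
                pickle.dump(value, handle)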

  • How does the socket API accept() function work?

    - by Eli Bendersky
    The socket API is the de-facto standard for TCP/IP and UDP/IP communications (that is, networking code as we know it). However, one of its core functions, accept() is a bit magical. To borrow a semi-formal definition: accept() is used on the server side. It accepts a received incoming attempt to create a new TCP connection from the remote client, and creates a new socket associated with the socket address pair of this connection. In other words, accept returns a new socket through which the server can communicate with the newly connected client. The old socket (on which accept was called) stays open, on the same port, listening for new connections. How does accept work? How is it implemented? There's a lot of confusion on this topic. Many people claim accept opens a new port and you communicate with the client through it. But this obviously isn't true, as no new port is opened. You actually can communicate through the same port with different clients, but how? When several threads call recv on the same port, how does the data know where to go? I guess it's something along the lines of the client's address being associated with a socket descriptor, and whenever data comes through recv it's routed to the correct socket, but I'm not sure. It'd be great to get a thorough explanation of the inner-workings of this mechanism.
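
    A short demonstration of the point in question, in Python (the port number is arbitrary): the socket returned by accept() has the same local port as the listener; the kernel demultiplexes incoming segments by the full (source IP, source port, destination IP, destination port) tuple, so recv() on each connected socket only ever sees that one client's data.

        import socket

        listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        listener.bind(("0.0.0.0", 8000))
        listener.listen(5)

        while True:
            conn, peer = listener.accept()          # new socket, same local port 8000
            print("local", conn.getsockname(), "talking to", peer)
            data = conn.recv(4096)                  # only this client's bytes arrive here
            conn.sendall(data)
            conn.close()                            # listener stays open for the next client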

  • Synchronizing Access to a member of the ASP.NET session

    - by Sam
    I'm building a JavaScript application and each user has an individual UserSession. The application makes a bunch of Ajax calls, and each Ajax call needs access to a single UserSession object for the user; the data in the UserSession object is unique to each user. Originally, during each Ajax call I would create a new UserSession object and its data members were stored in the ASP.NET Session. However, I found that the UserSession object was being instantiated a lot. To minimize the construction of the UserSession object, I wrapped it in a Singleton pattern and synchronized access to it. I believe that the synchronization is happening application-wide; however, I only need it to happen per user. I saw a post here that says the ASP.NET cache is synchronized; however, in the time between creating the object and inserting it into the cache, another thread could start constructing another object and insert it into the cache. Here is the way I'm currently synchronizing access to the object. Is there a better way than using "lock"... should we be locking on the HttpContext.Session object? private static object SessionLock = new object(); public static WebSession GetSession { get { lock (SessionLock) { try { var context = HttpContext.Current; WebSession result = null; if (context.Session["MySession"] == null) { result = new WebSession(context); context.Session["MySession"] = result; } else { result = (WebSession)context.Session["MySession"]; } return result; } catch (Exception ex) { ex.Handle(); return null; } } } }

  • Database for Python Twisted

    - by Will
    There's an API for Twisted apps to talk to a database in a scalable way: twisted.enterprise.adbapi. The confusing thing is, which database to pick? The database will have a Twisted app that is mostly making inserts and updates and relatively few selects, and then other strictly read-only clients that access the database directly, making selects. (The read-only users are not necessarily selecting the data that the Twisted app is inserting; it's not as though the database is being used as a message queue.) My understanding - which I'd like corrected/advised on - is that: Postgres is a great DB, but all the Python bindings - and there is a confusing maze of them - are abandonware. There is psycopg2, but that makes a lot of noise about doing its own connection pooling and such; does this co-exist gracefully/usefully/transparently with Twisted's async database connection pooling? SQLite is a great database for little things, but if used in a multi-user way it does whole-database locking, so performance would suck in the usage pattern I envisage. MySQL - after the Oracle takeover, who'd want to adopt it now, or adopt a fork? Is there anything else out there?
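
    On the Twisted side itself, the asynchronous pool lives in twisted.enterprise.adbapi and wraps any synchronous DB-API 2.0 driver in a thread pool; a minimal sketch with psycopg2 (the table, credentials and pool sizes are placeholders):

        from twisted.enterprise import adbapi
        from twisted.internet import reactor

        # adbapi runs blocking DB-API calls in a thread pool and returns Deferreds.
        dbpool = adbapi.ConnectionPool(
            "psycopg2",                        # any DB-API 2.0 module name works here
            host="localhost", database="app", user="app", password="secret",
            cp_min=3, cp_max=10)               # pool size; tune for the write-heavy load

        def record_event(source_id, payload):
            # Fire-and-forget insert; returns a Deferred that fires when the write completes.
            return dbpool.runOperation(
                "INSERT INTO events (source_id, payload) VALUES (%s, %s)",
                (source_id, payload))

        def show(rows):
            print(rows)

        d = dbpool.runQuery("SELECT count(*) FROM events")
        d.addCallback(show)
        d.addCallback(lambda _: reactor.stop())
        reactor.run()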
