Search Results

Search found 11409 results on 457 pages for 'large teams'.

Page 298/457

  • How can I transition from a front-end career to back-end career?

    - by jexx2345
    Hello, in 2004 I received my B.S. in Computer Science, using mostly Java for programming. Since then, I have been hired through recruiters for purely front-end positions in large companies, doing primarily HTML/CSS, JavaScript/jQuery, and OOP ActionScript 3. While I definitely have respect for the front-end, and have learned much, I feel very unfulfilled and would love to move to the back-end for the more complex programming I was doing in school. My target platform of choice is ASP.NET 3.5 (C#). With a resume that has absolutely no .NET or back-end experience, how can I transition into a junior back-end job? I am currently freelancing on a .NET e-commerce site, and plan to build a portfolio showcasing some apps (e-commerce MVC, blog, etc.) while learning the technology. Is having a Bachelor's in CS enough, or should I look into getting .NET certified? Is showing coursework still relevant to an employer even though it was from six years ago? Thank you

    Read the article

  • Getting Prepared/Planning

    - by The.Anti.9
    For the first time, I am trying to create a rather large .NET project. I think the largest one I have made so far was about 6 classes, but this one is already at 14. For the section I'm starting to work on, I'm having some trouble putting everything together in my head, which is what I normally do. I think it's just a little too complex for that. I want to plan it out, and I want some way to visualize it and be able to play with it and manipulate the structure easily. Is there any sort of (free) program I can use to do this?

    Read the article

  • Visual Studio Add-in - How do I create an Installer?

    - by Morgeh
    I have made a Visual Studio add-in as part of a project I'm working on using web services. When I created the new add-in project, Visual Studio generated all the code required and (I assume) installed the blank add-in on my PC. Since this is a large project, we are using SVN to manage the code base, and once I had done some of the work on the add-in I committed it, then checked it out on a different PC and attempted to run it. However, on the other PC, when I run the add-in in debug mode, the Tools menu entry for the add-in is not present and I can't run it. Am I right in assuming that creating the project installed the plugin on the original PC as well? And does that mean that I will need to create an installer for any other PCs I wish to use? Obviously at some point I intend to make an installer anyway, but not until development of the add-in is complete.

    Read the article

  • Load page, wait, log any JavaScript errors to file?

    - by David
    I would like to check a large number of HTML files with inline JavaScript for JavaScript errors. What I'm envisioning is this: script a browser to load a given page, wait a few seconds, and finally check the browser logs. I'm unsure, though, both about how to script a browser to load a given page and about how to access the JavaScript error log. I think the type of errors I'm worried about should show up in any modern browser, so I would just go with whatever makes it most convenient. I'd be working under either Mac OS X or Linux. Has anybody already tackled a similar problem? I've thought a bit about hacking something together based on a unit-testing framework -- generate a trivial (assertTrue(true)) test for each page and rely on the errors making it fail -- but I'm hoping for something more elegant. Thank you.
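
    For context, a minimal sketch of the browser-scripting half, assuming a present-day Selenium setup with Chrome; the driver choice, the goog:loggingPrefs capability, the wait, and the file URLs are all illustrative, not taken from the original question:

        import time
        from selenium import webdriver

        # Ask Chrome to keep its console log so JavaScript errors can be read back.
        options = webdriver.ChromeOptions()
        options.set_capability("goog:loggingPrefs", {"browser": "ALL"})
        driver = webdriver.Chrome(options=options)

        urls = ["file:///path/to/page1.html"]  # hypothetical list of local pages
        for url in urls:
            driver.get(url)    # load the page
            time.sleep(3)      # give inline scripts a few seconds to run
            for entry in driver.get_log("browser"):
                if entry["level"] == "SEVERE":
                    print(url, entry["message"])

        driver.quit()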

    Read the article

  • Eager fetching of children with JDO (Datanucleus)

    - by Jan
    Hi, can JDO fetch all children of a persistent object at once? For example:

        class Parent {
            @Persistent(mappedBy = "parent")
            private Set<Children> children;
        }

        class Children {
            @Persistent
            private Parent parent;

            @Persistent
            private String name;
        }

    In my case, I have a large number of Parents which I fetch at once. Accessing their children then takes a lot of time because they are fetched lazily. Does JDO (DataNucleus) support fetching the children at once, together with the Parents? I also tried to fetch all Children independently with another query and put them into the level-2 cache afterwards, but they are still fetched lazily (maybe JDO doesn't know about the relationship, because the foreign key (parent-id) hasn't been fetched at first?). Any ideas how to read this data structure faster? Cheers, Jan
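
    One commonly suggested direction, shown here as a minimal sketch built on the classes above (standard JDO 2.x annotations; the group name, helper class, and the assumption of an open PersistenceManager are illustrative, not the poster's code): a named fetch group that is activated on the fetch plan at query time.

        import java.util.List;
        import java.util.Set;
        import javax.jdo.PersistenceManager;
        import javax.jdo.annotations.FetchGroup;
        import javax.jdo.annotations.PersistenceCapable;
        import javax.jdo.annotations.Persistent;

        @PersistenceCapable
        @FetchGroup(name = "withChildren", members = {@Persistent(name = "children")})
        class Parent {
            @Persistent(mappedBy = "parent")
            private Set<Children> children;
        }

        class ParentQueries {
            // Activating the fetch group makes the children load eagerly
            // alongside their parents instead of lazily on first access.
            @SuppressWarnings("unchecked")
            static List<Parent> loadAllWithChildren(PersistenceManager pm) {
                pm.getFetchPlan().addGroup("withChildren");
                return (List<Parent>) pm.newQuery(Parent.class).execute();
            }
        }

    A simpler variant is marking the field with @Persistent(defaultFetchGroup = "true") so it is always loaded with the parent.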

    Read the article

  • SelfReferenceProperty vs. ListProperty Google App Engine

    - by John
    Hi All, I am experimenting with the Google App Engine and have a question. For the sake of simplicity, let's say my app is modeling a computer network (a fairly large corporate network with 10,000 nodes). I am trying to model my Node class as follows:

        class Node(db.Model):
            name = db.StringProperty()
            neighbors = db.SelfReferenceProperty()

    Let's suppose, for a minute, that I cannot use a ListProperty(). Based on my experiments to date, I can assign only a single entity to 'neighbors' - and I cannot use the "virtual" collection (node_set) to access the list of Node neighbors. So... my questions are: Does SelfReferenceProperty limit you to a single entity that you can reference? If I instead use a ListProperty, I believe I am limited to 5,000 keys, which I need to exceed. Thoughts? Thanks, John
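
    For context, a minimal sketch of the key-list alternative the question alludes to, using the same (old) google.appengine.ext.db API; the helper function is illustrative, not from the original post:

        from google.appengine.ext import db

        class Node(db.Model):
            name = db.StringProperty()
            # A SelfReferenceProperty holds exactly one key, so many neighbors
            # are typically stored as a list of keys instead.
            neighbors = db.ListProperty(db.Key)

        def neighbor_nodes(node):
            # Batch-fetch all referenced Node entities in one call.
            return db.get(node.neighbors)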

    Read the article

  • Are multiline tooltips possible using CWnd::EnableTooltips()?

    - by ctoneal
    I'm attempting to make my tooltips multiline, but I don't seem to be having much luck with it. I call CWnd::EnableTooltips() directly after creation (in this case, of an edit box) and I handle the TTN_NEEDTEXT message. My tooltips display correctly, but only as a single line. I've tried adding '\n' to the string I pass when handling TTN_NEEDTEXT, and also tried '\r\n'. No luck. It just displays them as normal text in the tooltip string. I then tried manually inserting 0x0D0A, but this just displays as boxes. I've been digging a bit, and have found a few offhand references on the web saying that multiline behavior may not work when using tooltips through the CWnd functions. I'd prefer not to have to replace them with CToolTipCtrl (since it's a rather large project). Has anyone run into this before? If so, is there any way around it?
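
    For context, a minimal sketch of one workaround that is often suggested for standard tooltips: setting a maximum tip width from the TTN_NEEDTEXT handler, which is what makes the control honor '\n' line breaks. The class name, handler name, and width are illustrative and assume the handler is already wired up with ON_NOTIFY_EX_RANGE:

        BOOL CMyDialog::OnToolTipNeedText(UINT /*id*/, NMHDR* pNMHDR, LRESULT* pResult)
        {
            TOOLTIPTEXT* pTTT = reinterpret_cast<TOOLTIPTEXT*>(pNMHDR);

            // Without a maximum width the tooltip draws everything on one line;
            // any positive width makes it respect embedded '\n' breaks.
            ::SendMessage(pNMHDR->hwndFrom, TTM_SETMAXTIPWIDTH, 0, 300);

            _tcsncpy_s(pTTT->szText, _countof(pTTT->szText),
                       _T("First line\nSecond line"), _TRUNCATE);
            *pResult = 0;
            return TRUE;   // message handled
        }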

    Read the article

  • Unix: millionth number in the series 2 3 4 6 9 13 19 28 42 63 ...?

    - by HH
    It takes about a minute to reach 3000 on my computer, but I need to know the millionth number in the series. The definition is recursive, so I cannot see any shortcut except calculating everything before the millionth number. How can you quickly calculate the millionth number in the series?

    Series definition: n_{i+1} = floor(3/2 * n_i), with n_0 = 2. Interestingly, only one site lists the series according to Google: this one.

    Too-slow Bash code:

        #!/bin/bash
        function serie {
            # bc emits backslash line continuations for very large numbers,
            # hence the tr/sed cleanup
            n=$( echo "3/2*$n" | bc -l | tr '\n' ' ' | sed -e 's@\\@@g' -e 's@ @@g' )
            n=$( echo $n/1 | bc )   # dummy floor
        }

        n=2
        nth=1
        while [ true ];   # $nth -lt 500 ]
        do
            serie $n      # n gets its new value through the global variable
            echo $nth $n
            nth=$( echo $nth + 1 | bc )   # nth++
        done
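
    For comparison, a minimal sketch of the same recurrence using arbitrary-precision integers (Python here purely as an illustration, not part of the original question); floor(3/2 * n) is just n * 3 // 2, so no bc or sed processes are spawned per step:

        n = 2
        for i in range(10**6):
            n = n * 3 // 2          # floor(3/2 * n) in exact integer arithmetic

        print(len(str(n)), "decimal digits")   # size of the resulting term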

    Read the article

  • PHP upload with progress bar

    - by Mitchan Adams
    Hi all, I want to create an upload form to upload large files. That's pretty easy; however, the upload process itself takes long, and it basically looks like nothing is happening for a few minutes. So now I'd like to insert a progress bar to show the user that something is happening and that they should just sit tight. I've read of numerous methods like APC and certain Flash plugins, but my site is hosted on a shared server and I can't install any new applications on it. I'm thinking that maybe it is possible to read the size of the temp file the upload creates via an AJAX page; by polling the size every few seconds I should be able to get the progress of the upload. Now the question I pose is... where is the temp file situated?

    Read the article

  • What's the ROI of using a build tool like ant or nant?

    - by leeand00
    To me this sounds like a really stupid question. Why would you not use a build tool? However, I need to explain to my co-worker why he should be using a build tool of some sort. He's getting really into the idea of working as a team with more programmers, but he isn't understanding the bigger picture of what needs to change in the build process in order to work with a larger team (i.e. defensive programming/unit testing your code, having a bug database, programming modular libraries, and using sub-repositories to store modules in version control). This is a rather large stack of technologies that I need to prove the ROI of... so I figured I'd start with the ROI of using a build tool rather than just... say... clicking compile.

    Read the article

  • Newtonsoft.Json throwing error: Array was not a one-dimensional array

    - by SIA
    Hi everybody, I am getting an error when trying to serialize a Product object:

        Product product = new Product();
        product.Name = "Apple";
        product.Expiry = new DateTime(2008, 12, 28);
        product.Price = 3.99M;
        product.Sizes = new string[3, 2]
        {
            { "Small", "40" },
            { "Medium", "44" },
            { "Large", "50" }
        };

        // this line throws "Array was not a one-dimensional array"
        string json = JsonConvert.SerializeObject(product);

    Is there any way to serialize a two-dimensional array with Newtonsoft.Json? Thanks in advance, SIA
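
    For context, a minimal standalone sketch (not from the original post) showing the usual workaround when a rectangular string[,] is rejected: a jagged string[][] serializes cleanly with Json.NET.

        using Newtonsoft.Json;

        class JaggedArrayExample
        {
            static void Main()
            {
                // string[][] instead of string[3, 2]
                var sizes = new[]
                {
                    new[] { "Small", "40" },
                    new[] { "Medium", "44" },
                    new[] { "Large", "50" }
                };

                // Prints [["Small","40"],["Medium","44"],["Large","50"]]
                string json = JsonConvert.SerializeObject(sizes);
                System.Console.WriteLine(json);
            }
        }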

    Read the article

  • Graph diffing and versioning tool

    - by hashable
    I am working with a team that edits large DAGs represented as single files. Currently we are unable to work with multiple users concurrently modifying the DAG. Is there a tool (somewhat like the Eclipse SVN plugin) that can do revision control on the file (manage timestamps/revision stamps) to identify incoming/outgoing/conflicting changes (Node/Link insertion/deletion/modification) and merge changes just like programmers do with source code files? The system should also be able to do dependency management. E.g. an incoming Link must not be accepted when one of its two Nodes is absent; that is, it should not "break" the existing DAG by allowing partial updates. Is there a framework to do this using generic "Node" and "Link" interfaces? Note: I am aware of Protege and its plugins. They currently do not satisfy my requirements.

    Read the article

  • Optimal Sharing of heavy computation job using Snow and/or multicore

    - by James
    Hi, I have the following problem. First, my environment: I have two 24-CPU servers to work with and one big job (resampling a large dataset) to share among them. I've set up multicore and a (socket) Snow cluster on each. As a high-level interface I'm using foreach. What is the optimal sharing of the job? Should I set up a Snow cluster using CPUs from both machines and split the job that way (i.e. use doSNOW for the foreach loop)? Or should I use the two servers separately and use multicore on each server (i.e. split the job in two chunks, run them on each server, and then stitch it back together)? Basically, what is an easy way to: 1. keep communication between servers down (since this is probably the slowest bit), and 2. ensure that the random numbers generated on the servers are not highly correlated.

    Read the article

  • OpenGL: Textured Primitives + High Framerate

    - by James D
    Short version: What's the best practice going forward for efficiently rendering large numbers of independent texture-mapped, lighted 2D/3D primitives (circles, rects, etc.) in OpenGL? For example: a typical particle system using billboarded quads/triangles, point sprites, or whatever other technique, with blending. I ask because after reading this thread on the messiness of OpenGL versioning/deprecation I'm starting to have my doubts. My specific question is not the ABCs of displaying primitives in OpenGL, but rather how to do so efficiently in post-deprecation (or pre-deprecation) OpenGL, in a way that's compatible with a wide range of commodity hardware and that's not going to break or itself get deprecated five years down the line. Thanks!

    Read the article

  • Android: Saving custom button and spinner

    - by Jacob Huggart
    Hello all, I am new to Android programming and was handed a fairly large program that is almost complete but needs support for switching between portrait and landscape views. I added android:configChanges="keyboardHidden|orientation" to the manifest and used onConfigurationChanged to save the view data, and that works. However, there is a button that displays the selected date (when pressed, a calendar to select the date comes up) and a spinner that displays the current view and is used to select a new view. Those two items are being cleared/reset and do not work at all after the screen flip. I have been attempting to use onSaveInstanceState and onRestoreInstanceState to fix that, but I cannot figure out how to get it to work. Any advice?
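
    For context, a minimal sketch of the onSaveInstanceState/onRestoreInstanceState pairing the poster is attempting; the key names, the fields (selectedDate, viewSpinner, dateButton), and the formatDate helper are illustrative, not taken from the actual app:

        @Override
        protected void onSaveInstanceState(Bundle outState) {
            super.onSaveInstanceState(outState);
            // Persist the picked date (as millis) and the spinner's position.
            outState.putLong("selectedDate", selectedDate.getTimeInMillis());
            outState.putInt("viewSpinnerPos", viewSpinner.getSelectedItemPosition());
        }

        @Override
        protected void onRestoreInstanceState(Bundle savedInstanceState) {
            super.onRestoreInstanceState(savedInstanceState);
            selectedDate.setTimeInMillis(savedInstanceState.getLong("selectedDate"));
            dateButton.setText(formatDate(selectedDate));   // hypothetical helper
            viewSpinner.setSelection(savedInstanceState.getInt("viewSpinnerPos"));
        }

    Note that when android:configChanges handles orientation, the activity is not recreated on rotation, so these callbacks only fire when the system actually destroys the activity.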

    Read the article

  • Coldfusion returning typed objects / AMF remoting

    - by Chin
    Is the same possible in ColdFusion? Currently I am using .NET/Fluorine to return objects to the client. Whilst testing, I like to pass strings representing the select statement and the custom object I wish to have returned from my service. Fluorine has a class ASObject on which you can set the var 'typeName', which works great. I am hoping that this is possible in ColdFusion. Does anyone know whether you can set the type of the returned object in a similar way? This is especially helpful with large collections, as the Flash Player will convert them to a local object of the same name, thus saving iterating over the collection to convert the objects to a particular custom object.

        foreach (DataRow row in ds.Tables[0].Rows)
        {
            ASObject obj = new ASObject();
            foreach (DataColumn col in ds.Tables[0].Columns)
            {
                obj.Add(col.ColumnName, row[col.ColumnName]);
            }
            obj.TypeName = pObjType;
            al.Add(obj);
        }

    Many thanks,

    Read the article

  • How to decouple an app's agile development from a database using BDUF?

    - by Rob Wells
    G'day, I was reading the article "Database as a Fortress" by Dan Chak from the excellent book "97 Things Every Software Architect Should Know" (sanitised Amazon link), which suggests that databases should not be designed using an agile approach. There's an SO question on agile approaches and databases, "Agile development and database changes", which has some excellent answers covering agile development approaches. In fact, one of the answers supplies a brilliant idea of what's needed for each update of the DB. ;-) But after reading Dan Chak's article, I am left wondering if an agile approach is really suitable for large-scale systems. This of course leads on to the question: how do you best decouple the agile development of an application from the BDUF database design it interacts with, without adding complicated translation layers to the final design? Any suggestions? cheers,

    Read the article

  • Program ends abruptly even in debugger - how did that happen?

    - by Mick
    I am trying to debug a program that unexpectedly shuts down. When I say "shuts down", I mean that one moment I am seeing all the windows being displayed, each of which is showing all the right data, then suddenly all the windows disappear. There is no message box reporting anything wrong. So I tried running the program in the debugger, hoping that it would somehow trap whatever was causing the program to abort, but even within the debugger the program simply ends abruptly. The last line in the debugger is: The program '[5500] test.exe: Native' has exited with code 0 (0x0). My program, which is extremely large and extremely old, has a lot of self-diagnostics. My suspicion is that perhaps a self-test has failed and maybe I just called exit(), forgetting to pop up a dialog explaining why. My question now is: how can I find out from which point in the code my program quit?

    Read the article

  • Can I have one makefile to build a hierarchical project?

    - by saramah
    I have several hundred files in a non-flat directory structure. My Makefile lists each source file, which, given the size of the project and the fact that there are multiple developers on the project, can create annoyances when we forget to put a new one in or take out the old ones. I'd like to generalize my Makefile so that make can simply build all .cpp and .h files without me having to specify all the filenames, given some generic rules for different types of files. My question: given a large number of files in a directory with lots of subfolders, how do I tell make to build them all without having to specify each and every subfolder as part of the path? And how do I make it so that I can do this with only one Makefile in the root directory?
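
    For context, a minimal sketch of one common pattern with GNU Make; the src directory, target name, and flags are illustrative. Sources are collected with a shell find, object names are derived from them, and a single pattern rule covers every subdirectory (recipe lines must begin with a tab):

        # Collect every .cpp under src/, however deeply nested.
        SRCS := $(shell find src -name '*.cpp')
        OBJS := $(SRCS:.cpp=.o)

        app: $(OBJS)
        	$(CXX) -o $@ $^

        # One pattern rule builds dir/foo.o from dir/foo.cpp in any subdirectory.
        %.o: %.cpp
        	$(CXX) $(CXXFLAGS) -c -o $@ $<

        clean:
        	rm -f $(OBJS) app

    Header dependencies are not tracked in this sketch; the compiler's dependency output (e.g. gcc -MMD) is the usual next step.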

    Read the article

  • Is there a built-in way to determine the size of a WCF response?

    - by jaminto
    Before a client gets the full payload of the web request, we'd like to first send it a measurement of the size of the response it will get. If the response will be too large, the client will present a message to the user giving them the option to abort the operation. We can write some custom code to preload the response on the server, determine the size, and then pass it on to the client, but we'd rather not if there's another way to do it. Does anyone know if WCF has any tricky way to do this? Or are there any free third party tools out there that will accomplish this? Thanks.

    Read the article

  • Entity Framework: How to specify parameter type in generated SQL (SQL Server 2005), nvarchar vs varchar

    - by Gratzy
    In Entity Framework I have an entity 'Client' that was generated from a database. There is a property called 'Account'; it is defined in the storage model as:

        <Property Name="Account" Type="char" Nullable="false" MaxLength="6" />

    and in the conceptual model as:

        <Property Name="Account" Type="String" Nullable="false" />

    When select statements are generated using a variable for Account (i.e. where m.Account == myAccount...), Entity Framework generates a parameterized query with a parameter of type nvarchar(6). The problem is that the column in the table has a data type of char(6). When this is executed there is a large performance hit because of the data type difference: Account is an index on the table, and instead of using the index I believe an index scan is done. Does anyone know how to force EF not to use Unicode for the parameter and to use varchar(6) instead?

    Read the article

  • Create ViewStack in Actionscript with creationPolicy = "auto"

    - by Ivan Zamylin
    In MXML, when I add components to ViewStack and creationPolicy is auto, components are not instantiated until I switch to them. Say, I have the following code:

        <mx:ViewStack creationPolicy="auto">
            <s:NavigatorContent>
                <s:DataGrid id="dg1" width="300"/>
            </s:NavigatorContent>
            <s:NavigatorContent>
                <s:DataGrid id="dg2" width="100"/>
            </s:NavigatorContent>
        </mx:ViewStack>

    How do I replicate this behavior in ActionScript? The problem is that my DataGrids hold large chunks of data, and thus I don't want them to be created at the same time.

    Read the article

  • Parsing a CSV File to a Rails Database

    - by Schroedinger
    G'day guys, I'm using FasterCSV and a rake script to parse a CSV with about 30 columns into my Rails DB as 'Trade' records. The script works fine when all of the values are treated as strings, but when I change a column to a decimal, int or other value, everything goes to hell. I'm wondering if FasterCSV has built-in parsing for ints and the like, or whether I'll have to manage these within my model. Basically, I'm given a huge amount of trade data, need to import it, and then need to provide feedback such as the average trade volume, the times, etc. I understand I can do all that with the wonderful records provided to me by ActiveRecord, but I wondered if there was an easier way to populate a rather large database from a given CSV. Several of the fields don't have values for certain rows; FasterCSV seems to work perfectly when they're all strings, but not when I try to use decimals or other types.
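
    For context, a minimal sketch of FasterCSV's built-in conversion (the file name and column names are illustrative, not from the original script); :converters => :numeric turns integer- and float-looking fields into Ruby numbers while leaving everything else as strings:

        require 'fastercsv'

        FasterCSV.foreach("trades.csv", :headers => true, :converters => :numeric) do |row|
          # With :headers, columns can be addressed by name.
          Trade.create!(
            :symbol => row["symbol"],
            :price  => row["price"],
            :volume => row["volume"]
          )
        end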

    Read the article

  • Can the SHA-1 algorithm be computed on a stream, with a low memory footprint?

    - by raoulsson
    I am looking for a way to compute SHA-1 checksums of very large files without having to fully load them into memory at once. I don't know the details of the SHA-1 implementation and therefore would like to know if it is even possible to do that. If you know the SAX XML parser, then what I'm looking for would be something similar: computing the SHA-1 checksum by only loading a small part of the file into memory at a time. All the examples I found, at least in Java, always depend on fully loading the file/byte array/string into memory. If you know of implementations (in any language), please let me know!
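
    Since the question mentions Java, a minimal sketch using the standard java.security.MessageDigest API, which accepts input incrementally; the file path and buffer size are illustrative:

        import java.io.FileInputStream;
        import java.io.IOException;
        import java.io.InputStream;
        import java.security.MessageDigest;
        import java.security.NoSuchAlgorithmException;

        public class StreamingSha1 {
            public static void main(String[] args) throws IOException, NoSuchAlgorithmException {
                MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
                byte[] buffer = new byte[8192];                  // only 8 KB in memory at a time
                try (InputStream in = new FileInputStream("/path/to/huge.file")) {
                    int read;
                    while ((read = in.read(buffer)) != -1) {
                        sha1.update(buffer, 0, read);            // feed the digest chunk by chunk
                    }
                }
                byte[] digest = sha1.digest();                   // finalize after the last chunk
                StringBuilder hex = new StringBuilder();
                for (byte b : digest) {
                    hex.append(String.format("%02x", b));
                }
                System.out.println(hex);
            }
        }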

    Read the article

  • WCF RIA Services v1.0 Timeouts

    - by Dan Wray
    I have RIA service methods which return quite a large amount of (binary) data to my Silverlight application. The methods themselves (database access etc.) complete quickly, but since I upgraded from the RC2 to v1.0 I've been experiencing timeouts, even when I limit the data returned to the smallest objects. Debugging shows the method returning, and then 20-30 seconds later the Silverlight "method complete" event fires. There is not enough data to cause that much lag. Could anyone suggest why this might be? I've been unable to find any breaking-changes documentation specific to the RC2-to-1.0 upgrade. I'm trying to eliminate any other possible causes as well, but this is literally a case of "worked before the install, doesn't work now"! (I also upgraded the Silverlight 4 Tools for Visual Studio.)

    Read the article
