Search Results

Search found 33477 results on 1340 pages for 'static vs non static'.

Page 40/1340 | < Previous Page | 36 37 38 39 40 41 42 43 44 45 46 47  | Next Page >

  • Why a "private static" is not seen in a method?

    - by Roman
    I have a class with the following declaration of the fields: public class Game { private static String outputFileName; .... } I set the value of outputFileName in the main method of the class. I also have a write method in the class which uses outputFileName. I always call write after main sets the value of outputFileName, but write still does not see the value of outputFileName; it says that it's equal to null. Could anybody please tell me what I am doing wrong? ADDED As requested, I am posting more code. In main: String outputFileName = userName + "_" + year + "_" + month + "_" + day + "_" + hour + "_" + minute + "_" + second + "_" + millis + ".txt"; f=new File(outputFileName); if(!f.exists()){ try { f.createNewFile(); } catch (IOException e) { // TODO Auto-generated catch block e.printStackTrace(); } } System.out.println("IN THE MAIN!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!"); System.out.println("------>" + outputFileName + "<------"); This line outputs the name of the file. Then in write I have: public static void write(String output) { // Open a file for appending. System.out.println("==========>" + outputFileName + "<============"); ...... } And it outputs null.
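
    Judging from the snippet, the likely culprit is that main declares a new local variable (String outputFileName = ...) that shadows the static field, so the field itself is never assigned and stays null. A minimal sketch of that fix (class trimmed down, file name hypothetical):

```java
public class Game {
    private static String outputFileName;

    public static void main(String[] args) {
        // Assign the existing static field -- writing "String outputFileName = ..."
        // here would declare a new local variable that shadows the field.
        outputFileName = "roman_2010_05_12.txt";  // hypothetical value
        write("hello");
    }

    public static void write(String output) {
        // Now prints the file name instead of null.
        System.out.println("==========>" + outputFileName + "<============");
    }
}
```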

    Read the article

  • How to get the first non-null value in Java?

    - by froadie
    Is there a Java equivalent of SQL's COALESCE function? That is, is there any way to return the first non-null value of several variables? e.g. Double a = null; Double b = 4.4; Double c = null; I want to somehow have a statement that will return the first non-null value of a, b, and c - in this case, it would return b, or 4.4. (Something like the sql method - return COALESCE(a,b,c)). I know that I can do it explicitly with something like: return a != null ? a : (b != null ? b : c) But I wondered if there was any built-in, accepted function to accomplish this.
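
    For reference, there is no built-in COALESCE in the core JDK (third-party libraries such as Apache Commons Lang offer helpers like ObjectUtils.firstNonNull); a minimal hand-rolled sketch:

```java
// A hand-rolled COALESCE-style helper (not a standard JDK method):
// returns the first non-null argument, or null if all arguments are null.
public final class Coalesce {
    public static <T> T coalesce(T... values) {
        for (T value : values) {
            if (value != null) {
                return value;
            }
        }
        return null;
    }

    public static void main(String[] args) {
        Double a = null, b = 4.4, c = null;
        System.out.println(coalesce(a, b, c)); // prints 4.4
    }
}
```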

    Read the article

  • How to force inclusion of an object file in a static library when linking into executable?

    - by Brian Bassett
    I have a C++ project that due to its directory structure is set up as a static library A, which is linked into shared library B, which is linked into executable C. (This is a cross-platform project using CMake, so on Windows we get A.lib, B.dll, and C.exe, and on Linux we get libA.a, libB.so, and C.) Library A has an init function (A_init, defined in A/initA.cpp), that is called from library B's init function (B_init, defined in B/initB.cpp), which is called from C's main. Thus, when linking B, A_init (and all symbols defined in initA.cpp) is linked into B (which is our desired behavior). The problem comes in that the A library also defines a function (Af, defined in A/Afort.f) that is intended to be dynamically loaded (i.e. LoadLibrary/GetProcAddress on Windows and dlopen/dlsym on Linux). Since there are no references to Af from library B, symbols from A/Afort.o are not included into B. On Windows, we can artificially create a reference by using the pragma: #pragma comment (linker, "/export:_Af") Since this is a pragma, it only works on Windows (using Visual Studio 2008). To get it working on Linux, we've tried adding the following to A/initA.cpp: extern void Af(void); static void (*Af_fp)(void) = &Af; This does not cause the symbol Af to be included in the final link of B. How can we force the symbol Af to be linked into B?
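
    A commonly suggested approach for this situation (a sketch, not verified against this particular project) is to tell the GNU linker to keep the symbol, or the whole archive, when producing B; the rough Linux counterpart of the MSVC /export pragma looks like this:

```sh
# Force one symbol to be pulled out of libA.a into libB.so. The exact name
# may be decorated (a Fortran compiler often emits af_ rather than Af);
# check with `nm libA.a | grep -i af`.
g++ -shared -o libB.so initB.o -L. -Wl,-u,Af -lA

# Or include every object file from the static library, referenced or not:
g++ -shared -o libB.so initB.o -L. \
    -Wl,--whole-archive -lA -Wl,--no-whole-archive
```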

    Read the article

  • How to run a command on a remote Windows system as a non-admin user with WMI?

    - by John
    I have a script written in Visual Basic that starts a process (given to the script as an argument) on a remote system (again, given as an argument) using WMI. This script works fine when using an Administrator account on the remote system, but when using a non-administrator account, I get the following error: ConnectServer Failed w/ (-2147024891) Access is denied. I'd like to be able to run processes on remote systems as a non-administrator user with this script, and I'm pretty sure the problem is due to security settings on the remote system, but I've not been able to reset the right ones.

    Read the article

  • Static variable definition order in c++

    - by rafeeq
    Hi, I have a class Tools which has a static variable std::vector<std::string> m_tools. Can I insert values into the static variable from the global scope of other classes defined in other files? Example: tools.h file: class Tools { public: static std::vector<std::string> m_tools; void print() { for(int i=0; i< m_tools.size(); i++) std::cout<<"Tools initialized :"<< m_tools[i]; } }; tools.cpp file: std::vector<std::string> Tools::m_tools; //Definition Using a Register class constructor to insert a new string into the static variable: class Register { public: Register(std::string str) { Tools::m_tools.push_back(str); } }; Different files insert strings into the static variable at global scope: first_tool.cpp //Global scope: declare a global register variable Register a("first_tool"); //////// second_tool.cpp //Global scope: declare a global register variable Register a("second_tool"); Main.cpp int main() { Tools abc; abc.print(); } Will this work? In the above example only one string is getting inserted into the static list. The problem looks like "at global scope it tries to insert the element before the definition is done". Please let me know if there is any way to set the static definition priority, or if there is any alternative way of doing the same.
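
    For what it's worth, a common way around this initialization-order problem (a sketch of the standard "construct on first use" idiom rather than this project's exact code) is to hide the registry behind an accessor with a function-local static, which is guaranteed to be constructed the first time it is used:

```cpp
#include <iostream>
#include <string>
#include <vector>

class Tools {
public:
    // The vector is created the first time registry() is called, so it is
    // always ready before any global Register object tries to use it.
    static std::vector<std::string>& registry() {
        static std::vector<std::string> tools;
        return tools;
    }
    static void print() {
        for (std::size_t i = 0; i < registry().size(); ++i)
            std::cout << "Tools initialized: " << registry()[i] << '\n';
    }
};

struct Register {
    explicit Register(const std::string& name) { Tools::registry().push_back(name); }
};

// Normally spread across first_tool.cpp / second_tool.cpp:
static Register a("first_tool");
static Register b("second_tool");

int main() {
    Tools::print();
    return 0;
}
```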

    Read the article

  • SQLAuthority News – Whitepaper – SQL Azure vs. SQL Server

    - by pinaldave
    SQL Server and SQL Azure are two Microsoft products that go almost hand in hand. There are plenty of misconceptions about SQL Azure. I have seen enough developers not planning for SQL Azure because they are not sure what exactly they are getting into. Some are confused, thinking Azure is not powerful enough. I disagree, and strongly urge all of you to read the following white paper written and published by Microsoft. SQL Azure vs. SQL Server by Dinakar Nethi, Niraj Nagrani SQL Azure Database is a cloud-based relational database service from Microsoft. SQL Azure provides relational database functionality as a utility service. Cloud-based database solutions such as SQL Azure can provide many benefits, including rapid provisioning, cost-effective scalability, high availability, and reduced management overhead. This paper compares SQL Azure Database with SQL Server in terms of logical administration vs. physical administration, provisioning, Transact-SQL support, data storage, SSIS, along with other features and capabilities. The content of this white paper is as follows: Similarities and Differences Logical Administration vs. Physical Administration Provisioning Transact-SQL Support Features and Types Key Benefits of the Service Self-Managing High Availability Scalability Familiar Development Model Relational Data Model The above summary text is taken from the white paper itself. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL White Papers, SQLAuthority News, T SQL, Technology Tagged: SQL Azure

    Read the article

  • Difference between Cassini [IIS Express] and the VS Development Server or Expression Web

    - by anirudha
    An MVC3 project can be run within Expression Web as well as Visual Studio, opened as a website rather than as a project, and they work the same. Even a BlogEngine.NET project opened in VS takes a while to start when you have many themes, whereas Expression Web debugs it in a second – because theme design doesn't matter to the code. Expression Web is good because it saves the time spent compiling the code when the changes we make are small design tweaks and nothing in the back-end code.   I found a little difference between Cassini and the VS development server: if an image path is written the wrong way, like <img src=”//img.png”/> instead of <img src=”/img.png”/> (the mistake being // instead of /), it does not work when debugging in Expression Web or Visual Studio, but in Cassini it works fine.   I also found that debugging BlogEngine.NET in Expression Web is a great thing, because VS can take something like a minute to start debugging the first time. Expression Web saves time when we design themes within it, which is a good option because the web is also made by designers.   So if you want to debug an application faster, use Cassini; Expression Web debugging is a good option when Visual Studio takes a long time to debug and EW does it in seconds.

    Read the article

  • .NET vs Windows 8: Rematch!

    - by Simon Cooper
    So, although you will be able to use your existing .NET skills to develop Metro apps, it turns out Microsoft are limiting Visual Studio 2011 Express to Metro-only. From the Express website: Visual Studio 11 Express for Windows 8 provides tools for Metro style app development. To create desktop apps, you need to use Visual Studio 11 Professional, or higher. Oh dear. To develop any sort of non-Metro application, you will need to pay for at least VS Professional. I suspect Microsoft (or at least, certain groups within Microsoft) have a very explicit strategy in mind. By making VS Express Metro-only, developers who don't want to pay for Professional will be forced to make their simple one-shot or open-source application in Metro. This increases the number of applications available for Windows 8 and Windows mobile devices, which in turn make those platforms more attractive for consumers. When you use the free VS 11 Express, instead of paying Microsoft, you provide them a service by making applications for Metro, which in turn makes Microsoft's mobile offering more attractive to consumers, increasing their market share. Of course, it remains to be seen if developers forced to jump onto the Metro bandwagon will simply jump ship to Android or iOS instead. At least, that's what I think is going on. With Microsoft, who really knows?

    Read the article

  • GPU Debugging with VS 11

    - by Daniel Moth
    With VS 11 Developer Preview we have invested tremendously in parallel debugging for both CPU (managed and native) and GPU debugging. I'll be doing a whole bunch of blog posts on those topics, and in this post I just wanted to get people started with GPU debugging, i.e. with debugging C++ AMP code. First I invite you to watch 6 minutes of a glimpse of the C++ AMP debugging experience though this video (ffw to minute 51:54, up until minute 59:16). Don't read the rest of this post, just go watch that video, ideally download the High Quality WMV. Summary GPU debugging essentially means debugging the lambda that you pass to the parallel_for_each call (plus any functions you call from the lambda, of course). CPU debugging means debugging all the code above and below the parallel_for_each call, i.e. all the code except the restrict(direct3d) lambda and the functions that it calls. With VS 11 you have to choose what debugger you want to use for a particular debugging session, CPU or GPU. So you can place breakpoints all over your code, then choose what debugger you want (CPU or GPU), and you'll only be able to hit breakpoints for the code type that the debugger engine understands – the remaining breakpoints will appear as unbound. If you want to hit the unbound breakpoints, you'd have to stop debugging, and start again with the other debugger. Sorry. We suck. We know. But once you are past that limitation, I think you'll find the experience truly rewarding – seriously! Switching debugger engines With the Developer Preview bits, one way to switch the debugger engine is through the project properties – see the screenshots that follow. This one is showing the CPU option selected, which is basically the default that you are all familiar with: This screenshot is showing the GPU option selected, by changing the debugger launcher (notice that this applies for both the local and remote case): You actually do not have to open the project properties just for switching the debugger engine, you can switch the selection from the toolbar in VS 11 Developer Preview too – see following screenshot (the effect is the same as if you opened the project properties and switched there) Breakpoint behavior Here are two screenshots, one showing a debugging session for CPU and the other a debugging session for GPU (notice the unbound breakpoints in each case) …and here is the GPU case (where we cannot bind the CPU breakpoints but can the GPU breakpoint, which is actually hit) Give C++ AMP debugging a try So to debug your C++ AMP code, pull down the drop down under the 'play' button to select the 'GPU C++ Direct3D Compute Debugger' menu option, then hit F5 (or the 'play' button itself). Then you can explore debugging by exploring the menus under the Debug and under the Debug->Windows menus. One way to do that exploration is through the C++ AMP debugging walkthrough on MSDN. Another way to explore the C++ AMP debugging experience, you can use the moth.cpp code file, which is what I used in my BUILD session debugger demo. Note that for my demo I was using the latest internal VS11 bits, so your experience with the Developer Preview bits won't be identical to what you saw me demonstrate, but it shouldn't be far off. Stay tuned for a lot more content on the parallel debugger in VS 11, both CPU and GPU, both managed and native. Comments about this post by Daniel Moth welcome at the original blog.
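
    For context, here is a minimal C++ AMP kernel of the kind the two debugger engines split between them (a sketch using the released restrict(amp) spelling; the Developer Preview bits discussed above still used restrict(direct3d)). Breakpoints inside the lambda are GPU breakpoints; everything around the parallel_for_each call belongs to the CPU debugger:

```cpp
#include <amp.h>
#include <iostream>
#include <vector>
using namespace concurrency;

int main() {
    std::vector<int> data(1024, 1);
    array_view<int, 1> av(static_cast<int>(data.size()), data);

    // Breakpoints set inside this lambda are bound by the GPU debugger;
    // breakpoints before/after parallel_for_each are bound by the CPU debugger.
    parallel_for_each(av.extent, [=](index<1> idx) restrict(amp) {
        av[idx] = av[idx] * 2;
    });

    av.synchronize();                     // copy results back to the vector
    std::cout << data[0] << std::endl;    // prints 2
    return 0;
}
```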

    Read the article

  • Non-Blocking I/O Made Possible in Java

    Java SE7 "Dolphin" release is nearing and we're chomping at the bit. So let's dig in and review non-blocking IO, a feature of java.nio (New I/O) package that is a part of Java v1.4, v1.5 and v1.6 and we'll also take a peek at the java.nio.file (NIO.2) package.

    Read the article

  • An end to the static …

    - by Dave Oliver
    Last October I learnt my company wanted to put together a new blog/social networking policy. I decided that out of respect for my employer I wouldn’t blog until this was sorted out. This was perhaps an easy decision to make as I was separating from my ex-wife at the time and frankly needed the time to concentrate on other things. So now the company has a brand new policy and I’m back into the dating game, so I thought I would blow off the cobwebs and get back to what I enjoy doing. First and foremost, SQL Server 2008 R2 is almost here, and to mark that fact I will be in London on Thursday at the Microsoft UK Tech-Days event. The subjects I most want to see are … Power Pivot – this is such an exciting technology! I’ve been a fan of Qlikview for years so it will be good to see how it compares. SQL Azure – Cloud computing is big right now, so it will be interesting to see what the RTM product can do. I have a few ideas for its use, and it will be interesting to see if SQL Azure is the right product … more on this in the next few weeks. Master Data Services – This is one of those technologies that Microsoft hasn’t been making much noise about … and frankly should have, because it is a game changer. Hmmm, cue a future “What is … ?” post. StreamInsight – An exciting events technology; again, another “What is … ?” post is around the corner on that. So, you thought that SQL Server 2008 R2 was just a release to make sure the years between SQL Server 2008 and SQL Server 2010 weren’t so long? I am however disappointed that Clustering across Subnets didn’t make it, and I'm not sure if Control Points made it, but all will be revealed later this week. Till then I will have to wait! Technorati Tags: Microsoft,Techdays,SQL Server 2008 R2

    Read the article

  • Fastest pathfinding for static node matrix

    - by Sean Martin
    I'm programming a route finding routine in VB.NET for an online game I play, and I'm searching for the fastest route finding algorithm for my map type. The game takes place in space, with thousands of solar systems connected by jump gates. The game devs have provided a DB dump containing a list of every system and the systems it can jump to. The map isn't quite a node tree, since some branches can jump to other branches - more of a matrix. What I need is a fast pathfinding algorithm. I have already implemented an A* routine and a Dijkstra's, both find the best path but are too slow for my purposes - a search that considers about 5000 nodes takes over 20 seconds to compute. A similar program on a website can do the same search in less than a second. This website claims to use D*, which I have looked into. That algorithm seems more appropriate for dynamic maps rather than one that does not change - unless I misunderstand its premise. So is there something faster I can use for a map that is not your typical tile/polygon base? GBFS? Perhaps a DFS? Or have I likely got some problem with my A* - maybe poorly chosen heuristics or movement cost? Currently my movement cost is the length of the jump (the DB dump has solar system coordinates as well), and the heuristic is a quick Euclidean calculation from the node to the goal. In case anyone has some optimizations for my A*, here is the routine that consumes about 60% of my processing time, according to my profiler. The coordinateData table contains a list of every system's coordinates, and neighborNode.distance is the distance of the jump. Private Function findDistance(ByVal startSystem As Integer, ByVal endSystem As Integer) As Integer 'hCount += 1 'If hCount Mod 0 = 0 Then 'Return hCache 'End If 'Initialize variables to be filled Dim x1, x2, y1, y2, z1, z2 As Integer 'LINQ queries for solar system data Dim systemFromData = From result In jumpDataDB.coordinateDatas Where result.systemId = startSystem Select result.x, result.y, result.z Dim systemToData = From result In jumpDataDB.coordinateDatas Where result.systemId = endSystem Select result.x, result.y, result.z 'LINQ execute 'Fill variables with solar system data for from and to system For Each solarSystem In systemFromData x1 = (solarSystem.x) y1 = (solarSystem.y) z1 = (solarSystem.z) Next For Each solarSystem In systemToData x2 = (solarSystem.x) y2 = (solarSystem.y) z2 = (solarSystem.z) Next Dim x3 = Math.Abs(x1 - x2) Dim y3 = Math.Abs(y1 - y2) Dim z3 = Math.Abs(z1 - z2) 'Calculate distance and round 'Dim distance = Math.Round(Math.Sqrt(Math.Abs((x1 - x2) ^ 2) + Math.Abs((y1 - y2) ^ 2) + Math.Abs((z1 - z2) ^ 2))) Dim distance = firstConstant * Math.Min(secondConstant * (x3 + y3 + z3), Math.Max(x3, Math.Max(y3, z3))) 'Dim distance = Math.Abs(x1 - x2) + Math.Abs(z1 - z2) + Math.Abs(y1 - y2) 'hCache = distance Return distance End Function And the main loop, the other 30% 'Begin search While openList.Count() <> 0 'Set current system and move node to closed currentNode = lowestF() move(currentNode.id) For Each neighborNode In neighborNodes If Not onList(neighborNode.toSystem, 0) Then If Not onList(neighborNode.toSystem, 1) Then Dim newNode As New nodeData() newNode.id = neighborNode.toSystem newNode.parent = currentNode.id newNode.g = currentNode.g + neighborNode.distance newNode.h = findDistance(newNode.id, endSystem) newNode.f = newNode.g + newNode.h newNode.security = neighborNode.security openList.Add(newNode) shortOpenList(OLindex) = newNode.id OLindex += 1 Else Dim proposedG As Integer = currentNode.g + neighborNode.distance If proposedG < gValue(neighborNode.toSystem) Then changeParent(neighborNode.toSystem, currentNode.id, proposedG) End If End If End If Next 'Check to see if done If currentNode.id = endSystem Then Exit While End If End While If clarification is needed on my spaghetti code, I'll try to explain.
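
    Speaking to the profiler numbers quoted above, one obvious optimisation (a sketch, reusing the question's names where possible; everything else is hypothetical) is to load every system's coordinates into a dictionary once, so the heuristic does two O(1) lookups instead of two LINQ-to-database queries per call:

```vbnet
' Build the cache once before the search; afterwards findDistance never
' touches the database. Tuple requires .NET 4 -- any small coordinate
' structure would do instead.
Private coordCache As New Dictionary(Of Integer, Tuple(Of Integer, Integer, Integer))

Private Sub BuildCoordCache()
    For Each row In jumpDataDB.coordinateDatas
        coordCache(row.systemId) = Tuple.Create(CInt(row.x), CInt(row.y), CInt(row.z))
    Next
End Sub

Private Function findDistance(ByVal startSystem As Integer, ByVal endSystem As Integer) As Integer
    Dim a = coordCache(startSystem)
    Dim b = coordCache(endSystem)
    Dim dx = Math.Abs(a.Item1 - b.Item1)
    Dim dy = Math.Abs(a.Item2 - b.Item2)
    Dim dz = Math.Abs(a.Item3 - b.Item3)
    ' Same octile-style estimate as the original heuristic.
    Return CInt(firstConstant * Math.Min(secondConstant * (dx + dy + dz), Math.Max(dx, Math.Max(dy, dz))))
End Function
```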

    Read the article

  • Microphone - static background noise suppression

    - by user1873947
    My soundcard is Realtek ALC 892. On Windows 7 I use official Realtek drivers, on Linux I use PulseAudio (on Ubuntu 13.10). On both Windows and Linux, when I enable microphone boost +30db (required because my microphone is quiet), I get very annoying and loud background noise (I also confirmed the background noise with Audacity on both systems). However, the Windows Realtek drivers have a noise suppression option which works (after enabling it, Audacity shows no background noise and my ears also confirm that there is no background noise). My question is how can I enable background noise suppression in ALSA/PulseAudio? Is there any module I can install, or maybe there is a setting for it that can be enabled in a config file? I can't find a solution for it, and this is the only thing that prevents me from switching to Linux completely - as I talk using a microphone a lot, and on Windows the Realtek software removes the background noise completely while PulseAudio doesn't remove it, which means the recorded voice on Linux is very bad. I know I could buy a better soundcard and microphone, but as I said, the Windows Realtek drivers remove the noise at the software level in real time (i.e. no noise when talking on TeamSpeak3/Steam/whatever VoIP programme), so I hope that there is such an option on Linux as well. Thanks in advance! This is also cross-posted on Unix StackExchange
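
    One thing worth trying (a sketch; whether it helps depends on how PulseAudio was built and on the card) is PulseAudio's module-echo-cancel, whose WebRTC backend also performs noise suppression. Added to /etc/pulse/default.pa (or a user copy) and followed by a PulseAudio restart:

```
# Run the microphone through the echo-cancel module (the WebRTC backend
# includes denoising) and expose the result as a new source "denoised".
load-module module-echo-cancel aec_method=webrtc source_name=denoised
# Use the processed source as the default input.
set-default-source denoised
```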

    Read the article

  • Non-English Character Display in Oracle SQL Developer

    - by thatjeffsmith
    I get a variation on this question at least once a week, if not more frequently. I’m from Israel, and the language on the databases is Hebrew. When I use the old and deprecated SQL*Plus (windows rich client) I can see the Hebrew clearly; when I use the latest SQL Developer, I get gibberish. This question appears on the forums about every week or so as well. So what’s the deal? Well, it starts with a basic misunderstanding of NLS client parameters. These should accurately reflect the language and locality setup on your LOCAL machine. DO NOT COPY what’s set in the database. These parameters work together with the database so that information can be transferred back and forth correctly. Having the wrong NLS parameters locally can be bad. [ORACLE DOCS]Setting the NLS_LANG parameter properly is essential to proper data conversion. The character set that is specified by the NLS_LANG parameter should reflect the setting for the client operating system. Setting NLS_LANG correctly enables proper conversion from the client operating system character encoding to the database character set. When these settings are the same, Oracle Database assumes that the data being sent or received is encoded in the same character set as the database character set, so character set validation or conversion may not be performed. This can lead to corrupt data if conversions are necessary. OK, so what are you supposed to do? Set the Font! 9 times out of 10, this preference fixes the problem with display issues. Make sure you set a font that supports the characters you’re trying to display. It’s as simple as that. This preference defines the font used to display characters in the editors and the data grids. If you have it set to a font that doesn’t have Hebrew character support – you’re not going to see Hebrew in SQL Developer. A few years ago…wow, like 15 years ago, I learned that the Tahoma font is pretty Unicode-friendly. Bad Font Selection A font without non-English character support Good Font Selection Exact same text, except rendered with the Tahoma font Summary Having problems seeing non-English text in SQL Developer? Check the font! And do not start messing with NLS parameters without talking to your DBA first.

    Read the article

  • GPL vs plugin interfaces not designed with a specific application in mind

    - by Kristóf Marussy
    I am not seeking or in need of legal advice, but an interesting thought experiment came to my mind. Imagine the following situation (I cannot really think of a concrete example and I am unsure if a real manifestation even exists): there is a free (libre) api A licensed under some permissive license or even the LGPL. Non-free application B implements this api in order to host plugins, but there are other free software doing the same thing. Moreover, there is plugin C acting as a plugin under api A. It links to library D, which is under the GPL, so C is also under the GPL. Plugins using A are loaded into hosts via a dlopen-like mechanism and use complex data structures for host-plugin communication. Neither B nor C distributes any files that may be required for A to function properly (like headers containing the structure definitions of A, or dynamic libraries containing helper functions for A written by the authors of A), but such things may exist. Now some user installs application B and plugin C on his machine, along with anything that may be required for api A to function properly. Then he proceeds and loads C into B and creates some intellectual property with B which is not a piece of software. Did a GPL violation happen at some point, and if so, who violated the GPL and why? The authors of C violate D's license by making C possible to be used in the non-free host B? This is a possibility because they can't give an exception to the GPL (like the one described in http://www.gnu.org/licenses/gpl-faq.html#GPLPluginsInNF or http://www.gnu.org/licenses/gpl-faq.html#LinkingOverControlledInterface) due to D's license terms. The authors of B violate C's and D's licenses by making it possible for C to be loaded in B? This is a possibility because http://www.gnu.org/licenses/gpl-faq.html#NFUseGPLPlugins disallows the mechanisms A uses for communication between the free and non-free modules. The authors of A, because the api may be used (and in this case, was used) for communication between GPL'd and non-free software? This would be extremely absurd. The user, because at the moment of loading C into B, he made a derived work of C? I think this is impossible, because he does not distribute it. But would the situation change if he decided to release a configuration file of B which makes B load C as a plugin? Nobody, because A counts as a 'system library', and both B and C directly interact only with A, not each other. In a sane world, this would happen... A concrete example of A could be some kind of audio (think LADSPA) or image processing api. However, I could find no such interface (that is free software, generic and is also implemented by commercial tools). A real-world example could also be quite enlightening.

    Read the article

  • What non-programming tools do programmers use?

    - by user828584
    I'm reading Code Complete with the intention of learning how to better structure my code, but I'm also learning a lot about how many aspects of programming there are beyond just writing the code. The book talks a lot about problem definition, determining the requirements, defining the structure, designing the code, etc. What tools are used for these non-writing steps of programming? Is there software that will help me design and plan out what I'm going to write before I do?

    Read the article

  • How to explain to a layperson the variance in programmer rates?

    - by Matt McCormick
    I recently talked to a guy who is looking for developers to build a product idea. He mentioned he has received interest from people, but the rates have varied from $20-120/hr. He estimates this project should take 3-6 months, and since he is non-technical, he is confused why there can be so much variance. I understand how I would choose someone, but I am a developer and can gauge other people's work. How can I explain to him (in a non-biased way, if possible, as I will be applying as well) the variance in rates? Is there any good analogy that would help?

    Read the article

  • How do I mount Samba share as non-root user

    - by Android Eve
    Is there a tutorial that explains, in detailed step-by-step fashion, how to smbmount a Samba share so it can be used by a non-root user on a Ubuntu 10.04 desktop? Note: there are numerous threads on Google search dealing with this seemingly new problem. Instructions that used to work on Ubuntu 8.04 (or an older version of smbfs) no longer work. I need something fresh, punctual and especially reproducible. Thanks.
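
    For reference, a minimal sketch of the usual approach (server, share, user and uid here are placeholders): mount with cifs and pass uid/gid options so the mounted files belong to the desktop user, or add an /etc/fstab entry for the share:

```sh
# One-off mount owned by the desktop user (uid/gid 1000 here):
sudo mount -t cifs //server/share /mnt/share \
     -o username=smbuser,uid=1000,gid=1000,iocharset=utf8

# Or an /etc/fstab line (then mount with "sudo mount /mnt/share"; credentials
# kept out of fstab in a file readable only by the user):
# //server/share  /mnt/share  cifs  credentials=/home/me/.smbcreds,uid=1000,gid=1000,noauto  0  0
```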

    Read the article

  • Static "LoD" hack opinions

    - by David Lively
    I've been playing with implementing dynamic level of detail for rendering a very large mesh in XNA. It occurred to me that (duh) the whole point of this is to generate small triangles close to the camera, and larger ones far away. Given that, rather than constantly modifying or swapping index buffers based on a feature's rendered size or distance from the camera, it would be a lot easier (and potentially quite a bit faster), to render a single "fan" or flat wedge/frustum-shaped planar mesh that is tessellated into small triangles close to the near or small end of the frustum and larger ones at the far end, sort of like this (overhead view) (Pardon the gap in the middle - I drew one side and mirrored it) The triangle sizes are chosen so that all are approximately the same size when projected. Then, that mesh would be transformed to track the camera so that the Z axis (center vertical in this image) is always aligned with the view direction projected into the XZ plane. The vertex shader would then read terrain heights from a height texture and adjust the Y coordinate of the mesh to match a height field that defines the terrain. This eliminates the need for culling (since the mesh is generated to match the viewport dimensions) and the need to modify the index and/or vertex buffers when drawing the terrain. Obviously this doesn't address terrain with overhangs, etc, but that could be handled to a certain extent by including a second mesh that defines a sort of "ceiling" via a different texture. The other LoD schemes I've seen aren't particularly difficult to implement and, in some cases, are a lot more flexible, but this seemed like a decent quick-and-dirty way to handle height map-based terrain without getting into geometry manipulation. Has anyone tried this? Opinions?
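
    For what it's worth, the height-fetch half of this is straightforward; a rough HLSL sketch of the idea (an .fx fragment with hypothetical names; vertex texture fetch needs shader model 3.0, i.e. XNA's HiDef profile):

```hlsl
// Rough sketch: the camera-locked fan mesh comes in flat (y = 0); the vertex
// shader looks up terrain height from a texture and displaces the vertex.
float4x4 WorldViewProjection;   // fan already rotated/translated to track the camera
float HeightScale;
float2 TerrainOrigin;           // world-space origin of the height field
float TerrainSize;              // world-space extent covered by the texture
texture HeightMap;
sampler HeightSampler = sampler_state { Texture = <HeightMap>; };

float4 TerrainVS(float3 position : POSITION0) : POSITION0
{
    // Map the vertex's world XZ position into height-map UV space.
    float2 uv = (position.xz - TerrainOrigin) / TerrainSize;
    // tex2Dlod is required for texture reads inside a vertex shader (vs_3_0).
    position.y = tex2Dlod(HeightSampler, float4(uv, 0, 0)).r * HeightScale;
    return mul(float4(position, 1.0), WorldViewProjection);
}
```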

    Read the article

  • What dangers await if I block non-standard, non-major-USA search engine bots from my USA-only website?

    - by Ryan
    I noticed tons of bandwidth being used by non-USA search engine bots, so I began blocking them in an effort to save bandwidth and cpu cycles for actual users and the search engines they come from (Google, Bing, Yahoo, Ask, etc.). Other than potentially losing some international traffic (which isn't really important to us since all of our content is very USA-centric), what additional dangers should I be concerned about? I'm using a modified version of Jeff Starr's User Agent Blocklist
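
    For reference, a minimal sketch of the usual Apache 2.2 approach (the user-agent strings below are illustrative placeholders; real blocklists such as the one mentioned are much longer):

```apache
# Flag requests whose User-Agent matches a blocked bot, then deny them.
SetEnvIfNoCase User-Agent "ExampleBotOne" block_bot
SetEnvIfNoCase User-Agent "ExampleBotTwo" block_bot

Order Allow,Deny
Allow from all
Deny from env=block_bot
```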

    Read the article

  • Static teams or dynamic teams?

    - by Richard DesLonde
    Is it better to assemble permanent teams of developers within the company that always work together, from project to project, or is it better to have dynamic teams that assemble for a project and then disband afterwards? My inclination is to treat the entire company as a "platoon" and to assemble "fireteams" for individual projects, choosing from the "platoon" those developers best suited for the project.

    Read the article
