Search Results

Search found 71953 results on 2879 pages for 'work environment'.

  • Tips To Manage An Effective Comeback To Work After A Long Vacation

    - by Gopinath
    Vacations are very relaxing – no need to reply to endless mails, no marathon meetings or conference calls. It’s all about fun during the vacation. The trouble begins as the vacation nears its end and you start to think about getting back to work. Once we are back at work after a long vacation there are many things to worry about – a pile of snail mail, hundreds of unread emails, a flood of phone calls to answer and a stream of scheduled meetings. How do you handle all the backlog and catch up quickly with the inflow of work? Here is a management tip from the Harvard Business Review blog on getting back to work the right way after a long vacation: Block off your morning. Make sure you don’t have any meetings scheduled or big projects due. Then before you open your inbox, pause and think about your work priorities. As you make your way through emails and voicemails, focus on returning the messages that are connected to what matters most. Defer or delegate things that aren’t top priority. And remember it will probably take more than one day to get caught up, so be easy on yourself. Hopefully these tips let you plan the right comeback to work after your vacation. CC image credit: flickr/dfwcre8tive This article, titled Tips To Manage An Effective Comeback To Work After A Long Vacation, was originally published at Tech Dreams. Grab our RSS feed or fan us on Facebook to get updates from us.

    Read the article

  • At what point does "constructive" criticism of your code become unhelpful?

    - by user15859
    I recently started as a junior developer. As well as being one of the least experienced people on the team, I'm also a woman, which comes with all sorts of its own challenges working in a male-dominated environment. I've been having problems lately because I feel like I am getting too much unwarranted pedantic criticism on my work. Let me give you an example of what happened recently. The team lead was too busy to push in some branches I made, so he didn't get to them until the weekend. I checked my mail, not really meaning to do any work, and found that my two branches had been rejected on the basis of variable names, making error messages more descriptive, and moving some values to the config file. I don't feel that rejecting my branch on this basis is useful. Lots of people were working over the weekend, and I had never said that I would be working. Effectively, some people were probably blocked because I didn't have time to make the changes and resubmit. We are working on a project that is very time-sensitive, and it seems to me that it's not helpful to outright reject code based on things that are transparent to the client. I may be wrong, but it seems like these kinds of things should be handled in patch-type commits when I have time. Now, I can see that in some environments, this would be the norm. However, the criticism doesn't seem equally distributed, which is what leads to my next problem. The basis of most of these problems was the fact that I was in a codebase that someone else had written and was trying to be minimally invasive. I was mimicking the variable names used elsewhere in the file. When I stated this, I was bluntly told, "Don't mimic others, just do what's right." This is perhaps the least useful thing I could have been told. If the code that is already checked in is unacceptable, how am I supposed to tell what is right and what is wrong? If the basis of the confusion was coming from the underlying code, I don't think it's my responsibility to spend hours refactoring a whole file that someone else wrote (and that works perfectly well), potentially introducing new bugs etc. I'm feeling really singled out and frustrated in this situation. I've gotten a lot better about following the standards that are expected, and I feel frustrated that, for example, when I refactor a piece of code to ADD error checking that was previously missing, I'm only told that I didn't make the errors verbose enough (and the branch was rejected on this basis). What if I had never added it to begin with? How did it get into the code to begin with if it was so wrong? This is why I feel so singled out: I constantly run into this existing problematic code, which I either mimic or refactor. When I mimic it, it's "wrong", and if I refactor it, I'm chided for not doing enough (and if I go all the way, I risk introducing bugs, etc.). Again, if this is such a problem, I don't understand how any code gets into the codebase, and why it becomes my responsibility when it was written by someone else, who apparently didn't have their code reviewed. Anyway, how do I deal with this? Please remember that I said at the top that I'm a woman, and I'm sure these guys don't usually have to worry about decorum when they're reviewing other guys' code, but honestly that doesn't work for me, and it's causing me to be less productive. I'm worried that if I talk to my manager about it, he'll think I can't handle the environment, etc.

    Read the article

  • How do I implement repository pattern and unit of work when dealing with multiple data stores?

    - by Jason
    I have a unique situation where I am building a DDD-based system that needs to access both Active Directory and a SQL database for persistence. Initially this wasn't a problem because our design was set up so that we had a unit of work that looked like this: public interface IUnitOfWork { void BeginTransaction(); void Commit(); } and our repositories looked like this: public interface IRepository<T> { T GetByID(); void Save(T entity); void Delete(T entity); } In this setup our load and save would handle the mapping between both data stores because we wrote it ourselves. The unit of work would handle transactions and would contain the Linq To SQL data context that the repositories would use for persistence. The Active Directory part was handled by a domain service implemented in infrastructure and consumed by the repositories in each Save() method. Save() was responsible for interacting with the data context to do all the database operations. Now we are trying to adapt it to Entity Framework and take advantage of POCO. Ideally we would not need the Save() method because the domain objects are being tracked by the object context; we would just need to add a Save() method on the unit of work to have the object context save the changes, and a way to register new objects with the context. The new proposed design looks more like this: public interface IUnitOfWork { void BeginTransaction(); void Save(); void Commit(); } public interface IRepository<T> { T GetByID(); void Add(T entity); void Delete(T entity); } This solves the data access problem with Entity Framework, but does not solve the problem with our Active Directory integration. Before, it was in the Save() method on the repository, but now it has no home. The unit of work knows nothing other than the Entity Framework data context. Where should this logic go? I argue this design only works if you have just one data store using Entity Framework. Any ideas how to best approach this issue? Where should I put this logic?
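
    One possible home for that logic is the unit of work itself: let it coordinate the Entity Framework context and a directory-service abstraction, so the repositories stay persistence-ignorant. The sketch below is only one way to shape this against the proposed IUnitOfWork; IActiveDirectoryService and every member name in it are illustrative assumptions, not part of the original design.

        using System;
        using System.Data.Objects;     // ObjectContext (EF 4-era POCO support)
        using System.Transactions;

        // Hypothetical abstraction over the Active Directory domain service.
        public interface IActiveDirectoryService
        {
            void QueueChange(Action applyChange);   // repositories queue AD writes here
            void Flush();                           // push all queued changes to AD
        }

        public class CompositeUnitOfWork : IUnitOfWork
        {
            private readonly ObjectContext _context;
            private readonly IActiveDirectoryService _directory;
            private TransactionScope _scope;

            public CompositeUnitOfWork(ObjectContext context, IActiveDirectoryService directory)
            {
                _context = context;
                _directory = directory;
            }

            public void BeginTransaction()
            {
                _scope = new TransactionScope();
            }

            public void Save()
            {
                // Repositories only Add/Delete; the object context tracks the changes.
                _context.SaveChanges();
            }

            public void Commit()
            {
                _context.SaveChanges();   // database work first, still inside the transaction
                _directory.Flush();       // then the directory writes; a failure here aborts the scope
                if (_scope != null)
                {
                    _scope.Complete();
                    _scope.Dispose();
                    _scope = null;
                }
            }
        }

    Ordering the directory Flush() before Complete() means a failed directory write still rolls back the SQL changes; the reverse guarantee is not available, since the Active Directory calls are not transactional.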

    Read the article

  • WordPress transfer to new FTP server. Help, home link doesn't work?

    - by judi
    Hi, I've transferred my WordPress site to a new FTP server, but my home link doesn't work properly. When I click on it, it goes to http://123.456.78.8/mydomain.com and I get a page not found message. I've discovered it needs a / at the end to work. Does anyone know a way to fix this before I put it on my live site? Could it be a database or config file issue? Thanks for all your help. Regards, Judi P.S. Could it be the permalink structure? Will it work when I change my domain to http://mydomain.com?

    Read the article

  • Parallelism in .NET – Part 5, Partitioning of Work

    - by Reed
    When parallelizing any routine, we start by decomposing the problem.  Once the problem is understood, we need to break our work into separate tasks, so each task can be run on a different processing element.  This process is called partitioning. Partitioning our tasks is a challenging feat.  There are opposing forces at work here: too many partitions add overhead, too few partitions leave processors idle.  Trying to strike the perfect balance between the two extremes is the goal for which we should aim.  Luckily, the Task Parallel Library automatically handles much of this process.  However, there are situations where the default partitioning may not be appropriate, and knowledge of our routines may allow us to guide the framework to make better decisions. First off, I’d like to say that this is a more advanced topic.  It is perfectly acceptable to use the parallel constructs in the framework without considering the partitioning taking place.  The default behavior in the Task Parallel Library is very well-behaved, even for unusual workloads, and should rarely be adjusted.  I have found few situations where the default partitioning behavior in the TPL is not as good as or better than my own hand-written partitioning routines, and recommend using the defaults unless there is a strong, measured, and profiled reason to avoid using them.  However, understanding partitioning, and how the TPL partitions your data, helps in understanding the proper usage of the TPL. I indirectly mentioned partitioning while discussing aggregation.  Typically, our systems will have a limited number of Processing Elements (PEs), which is the terminology used for hardware capable of processing a stream of instructions.  For example, in a standard Intel i7 system, there are four processor cores, each of which has two potential hardware threads due to Hyperthreading.  This gives us a total of 8 PEs – theoretically, we can have up to eight operations occurring concurrently within our system. In order to fully exploit this power, we need to partition our work into Tasks.  A task is a simple set of instructions that can be run on a PE.  Ideally, we want to have at least one task per PE in the system, since fewer tasks means that some of our processing power will be sitting idle.  A naive implementation would be to just take our data, and partition it with one element in our collection being treated as one task.  When we loop through our collection in parallel, using this approach, we’d just process one item at a time, then reuse that thread to process the next, etc.  There’s a flaw in this approach, however.  It will tend to be slower than necessary, often slower than processing the data serially. The problem is that there is overhead associated with each task.  When we take a simple foreach loop body and implement it using the TPL, we add overhead.  First, we change the body from a simple statement to a delegate, which must be invoked.  In order to invoke the delegate on a separate thread, the delegate gets added to the ThreadPool’s current work queue, and the ThreadPool must pull this off the queue, assign it to a free thread, then execute it.  If our collection had one million elements, the overhead of trying to spawn one million tasks would destroy our performance. The answer here is to partition our collection into groups, and have each group of elements treated as a single task.
    By adding a partitioning step, we can break our total work into small enough tasks to keep our processors busy, but large enough tasks to avoid overburdening the ThreadPool.  There are two clear, opposing goals here: always try to keep each processor working, but also try to keep the individual partitions as large as possible. When using Parallel.For, the partitioning is always handled automatically.  At first, partitioning here seems simple.  A naive implementation would merely split the total element count up by the number of PEs in the system, and assign a chunk of data to each processor.  Many hand-written partitioning schemes work in exactly this manner.  This perfectly balanced, static partitioning scheme works very well if the amount of work is constant for each element.  However, this is rarely the case.  Often, the length of time required to process an element grows as we progress through the collection, especially if we’re doing numerical computations.  In this case, the first PEs will finish early, and sit idle waiting on the last chunks to finish.  Sometimes, work can decrease as we progress, since previous computations may be used to speed up later computations.  In this situation, the first chunks will be working far longer than the last chunks.  In order to balance the workload, many implementations create many small chunks, and reuse threads.  This adds overhead, but does provide better load balancing, which in turn improves performance. The Task Parallel Library handles this more elaborately.  Chunks are determined at runtime, and start small.  They grow slowly over time, getting larger and larger.  This tends to lead to near-optimal load balancing, even in odd cases such as increasing or decreasing workloads.  Parallel.ForEach is a bit more complicated, however. When working with a generic IEnumerable<T>, the number of items required for processing is not known in advance, and must be discovered at runtime.  In addition, since we don’t have direct access to each element, the scheduler must enumerate the collection to process it.  Since IEnumerable<T> is not thread safe, it must lock on elements as it enumerates, create temporary collections for each chunk to process, and schedule this out.  By default, it uses a partitioning method similar to the one described above.  We can see this directly by looking at the Visual Partitioning sample shipped by the Task Parallel Library team, and available as part of the Samples for Parallel Programming.  When we run the sample with four cores and the default Load Balancing partitioning scheme, we see this: The colored bands represent each processing core.  You can see that, when we started (at the top), we begin with very small bands of color.  As the routine progresses through the Parallel.ForEach, the chunks get larger and larger (seen as larger and larger stripes). Most of the time, this is fantastic behavior, and will most likely outperform any custom-written partitioning.  However, if your routine is not scaling well, it may be due to a failure in the default partitioning to handle your specific case.  With prior knowledge about your work, it may be possible to partition data more meaningfully than the default Partitioner. There is the option to use an overload of Parallel.ForEach which takes a Partitioner<T> instance.  The Partitioner<T> class is an abstract class which allows for both static and dynamic partitioning.
By overriding Partitioner<T>.SupportsDynamicPartitions, you can specify whether a dynamic approach is available.  If not, your custom Partitioner<T> subclass would override GetPartitions(int), which returns a list of IEnumerator<T> instances.  These are then used by the Parallel class to split work up amongst processors.  When dynamic partitioning is available, GetDynamicPartitions() is used, which returns an IEnumerable<T> for each partition.  If you do decide to implement your own Partitioner<T>, keep in mind the goals and tradeoffs of different partitioning strategies, and design appropriately. The Samples for Parallel Programming project includes a ChunkPartitioner class in the ParallelExtensionsExtras project.  This provides example code for implementing your own, custom allocation strategies, including a static allocator of a given chunk size.  Although implementing your own Partitioner<T> is possible, as I mentioned above, this is rarely required or useful in practice.  The default behavior of the TPL is very good, often better than any hand written partitioning strategy.
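
    As a concrete illustration (mine, not from the article), the built-in Partitioner.Create overloads already cover the two common cases described above, chunked range partitioning and load-balanced partitioning, before you ever need a custom Partitioner<T>:

        using System;
        using System.Collections.Concurrent;
        using System.Threading.Tasks;

        class PartitioningSketch
        {
            static void Main()
            {
                double[] data = new double[1000000];

                // Chunked range partitioning: each task receives a contiguous index
                // range, so the per-element delegate overhead is amortized.
                var ranges = Partitioner.Create(0, data.Length, 10000);
                Parallel.ForEach(ranges, range =>
                {
                    for (int i = range.Item1; i < range.Item2; i++)
                    {
                        data[i] = Math.Sqrt(i);
                    }
                });

                // Load-balanced partitioning over an existing collection:
                // chunks are handed out on demand and grow over time.
                var balanced = Partitioner.Create(data, loadBalance: true);
                Parallel.ForEach(balanced, item =>
                {
                    // process each item here
                });
            }
        }

    A custom Partitioner<T> subclass only becomes interesting when neither built-in strategy matches your profiled behavior, which, as noted above, is rare.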

    Read the article

  • Best development architecture for a small team of programmers

    - by Tio
    Hi all. I'm in the first month of work at a new company, and after I met the two programmers and asked how things are organized in terms of projects inside the company, they simply shrugged their shoulders and said that nothing is organized. I think my jaw hit the ground right there. (I know some of you think I should quit, but I'm in a privileged position: I'm the most experienced there, so there's room for me to grow inside the company, and I'm taking the high road.) So I talked to the IT guy and one of the programmers, and maybe this week I'm going to get a server all to myself to start organizing things. I've used various architectures in my previous work experiences: in one I was developing on a server on the network (no source control, of course); in another I was developing on my local computer, with no server on the network, just source control. And at home I have a mix of the two: everything I code is on a server on the network, I have those folders under source control, and I also have a no-ip account configured on that server so I can access it everywhere and show the clients anything. For me, this last solution (the one I have at home) is the best: a network server with a LAMP stack; the server has a public IP so we can access it by domain name, and we use subdomains for each project; everybody works directly on the network server. I think the problem arises when two or more people want to work on the same project; in this case the only way to do it is by using source control and local repositories. This is great, but I think it makes development a lot more complicated. In the example I gave, to make a change to the code, I would simply need to open the file in my favorite editor, make the change, alter the database, check the changes into source control, and presto, all done. Using local repositories, I would have to get the latest version, run the scripts on the local database to update it, alter the file, alter the database, check in the changes to the network server, update the database on the network server, see if everything is running well on the network server, and presto, all done. To me this seems overcomplicated for a change to a simple PHP page. I could share the database between local development and the network server; that sure would help. Maybe the best way to do this is simply: a network server with a LAMP stack (a test server, so to speak), a public server accessible through the web, and a LAMP stack on every developer's computer (minus the database). We develop locally, test, then check the changes into the test server, and presto. What do you think? Maybe I should start doing this at home. Thanks and best regards...

    Read the article

  • Need to make a scheduled task run as another user but keep the current user’s environment

    - by Chad Marmon
    I need to back up users' .pst files. The current method I am trying is making a shadow copy using Diskshadow. My script works great, but Diskshadow needs to be run as administrator while also retaining the logged-on user's environment variables; specifically, the %USERNAME% and %HOMESHARE% variables, so the right user's files get copied up to the right network location. I have for the most part got this to work, but there's no straightforward (or secure, at least) way to pass the password. If I set up a scheduled task to run the script as a domain user with local admin privs, the environment variables get lost. I need to run this script automagically, with no user interaction. If I could figure out how to make a scheduled task run as another user but keep the current user's environment, I think this would work, but I've been beating my head against that for a while now, without any luck.

    Read the article

  • Making Use of Plan Explorer in my own Environment

    - by Jonathan Kehayias
    Back in October 2010, I briefly blogged about the SQL Sentry Plan Explorer in my blog post wrap-up for SQL Bits 7, and how impressed I was with what I saw from an Alpha demo standpoint from Greg Gonzalez ( Blog | Twitter ) while I was at SQLBits 7 in York.  To be 100% honest and transparent, Greg gave me early access to this tool after discussing it at SQLBits 7, and I had the opportunity to test a number of pre-Beta releases where I was able to offer significant feedback and submit bugs in the...(read more)

    Read the article

  • Choosing the Right JDeveloper Release for Your EBS Environment

    - by Sara Woodhull
    Oracle E-Business Suite developers use a special build of Oracle JDeveloper. This build contains the correct Oracle Application Framework (OA Framework or OAF) libraries corresponding to a specific version of Oracle E-Business Suite (specifically, to an ATG patch level). For customers and developers who are building OA Framework components and extensions to Oracle E-Business Suite, one of the first questions is "How do I find the right version of JDeveloper?" Oracle makes these OA Framework/JDeveloper builds available in separate patches when a new ATG patch level is released. A handy My Oracle Support document shows the ATG patch levels and the corresponding patch containing the correct version of JDeveloper with the right versions of OA Framework libraries: How to find the correct version of JDeveloper to use with eBusiness Suite 11i or Release 12.x (Doc ID 416708.1)

    Read the article

  • The DOS DEBUG Environment

    - by MarkPearl
    Today I thought I would go back in time and have a look at the DEBUG command that has been available since the dawn of DOS, MS-DOS and Microsoft Windows. Up to today I always knew it was there but had no clue how to use it, so for those that are interested this might be a great geek party trick to pull out when you want to awe the younger generation and show them what “real” programming is about. But wait, you will have to do it relatively quickly as it seems like DEBUG was finally dumped from the Windows group in Windows 7. Not to worry, pull out that Windows XP box, which will get you even more geek points, and you can still poke DEBUG a bit. So, for those that are interested and want to find out a bit about the history of DEBUG, read the wiki link here. That all put aside, let's get our hands dirty... How to Start DEBUG in Windows Make sure your version of Windows supports DEBUG. Open up a console window. Make a directory where you want to play with DEBUG – in my instance I called it C221. Enter the directory and type Debug. You will get a response with a – as illustrated in the image below…   The commands available in DEBUG There are several commands available in DEBUG. The most common ones are A (Assemble) R (Register) T (Trace) G (Go) D (Dump or Display) U (Unassemble) E (Enter) P (Proceed) N (Name) L (Load) W (Write) H (Hexadecimal) I (Input) O (Output) Q (Quit) I am not going to cover all these commands, but what I will do is go through a few of them briefly. A is for Assemble Command (to write code) The A command translates assembly language statements into machine code. It is quite useful for writing small assembly programs. Below I have written a very basic assembly program. The code typed out is as follows: mov ax,0015 mov cx,0023 sub cx,ax mov [120],al mov cl,[120]A nop R is for Register (to jump to a point in memory) The r command turns out to be one of the most frequent commands you will use in DEBUG. It allows you to view the contents of registers and to change their values. It can be used with the following combinations… R – Displays the contents of all the registers R f – Displays the flags register R register_name – Displays the contents of a specific register All three methods are illustrated in the image above. T is for Trace (to execute a program step by step) The t command allows us to execute the program step by step. Before we can trace the program we need to point back to the beginning of the program. We do this by typing in r ip, which moves us back to memory point 100. We then type trace, which executes the first line of code (line 100) (as shown in the image below, starting from the red arrow). You can see from the above image that the register AX now contains 0015 as per our instruction mov ax,0015. You can also see that IP points to line 0103, which has the MOV CX,0023 command. If we type t again it will now execute the second line of the program, which moves 23 into the cx register. Again, we can see that the line of code was executed and that the CX register now holds the value of 23. What I would like to highlight now is the section underlined in red. These are the status flags.
    The ones we are going to look at now are the 1st (NV), 4th (PL), 5th (NZ) & 8th (NC). NV means no overflow; the alternate would be OV. PL means that the sign of the previous arithmetic operation was Plus; the alternate would be NG (Negative). NZ means that the result of the previous arithmetic operation was Not Zero; the alternate would be ZR. NC means that No final Carry resulted from the previous arithmetic operation; CY means that there was a final Carry. We could now follow this process of entering the t command until the entire program is executed line by line. G is for Go (to execute a program up to a certain line number) So we have looked at executing a program line by line, which is fine if your program is minuscule BUT totally impractical if we have any decent-sized program. A quicker way to run some lines of code is to use the G command. The ‘g’ command executes a program up to a certain specified point. It can be used in connection with the reset IP command. You would set your initial point and then run the G command with the line you want to end on. P is for Proceed (similar to trace but slightly more streamlined) Another command similar to trace is the proceed command. The p command behaves like trace, except that when it encounters a CALL, INT or LOOP command it executes the whole routine in one step. In the example below I modified our example program to include an int 20 at the end of it, as illustrated in the image below… Then, while executing the code, when I encountered the int 20 command I typed the P command and the program terminated normally (illustrated below). D is for Dump (or, for those more polite, Display) So, we have all these assembly lines of code, but if you have ever opened up an exe or com file in a text/hex editor, it looks nothing like assembly code. The D command is a way that we can see what our code looks like in memory (or in a hex editor). If we examine the image above, we can see that DEBUG is storing our assembly code with each instruction following immediately after the previous one. For instance, at memory address 110 we have int and at 111 we have 20. If we examine the dump of memory, we can see that at memory point 110 CD is stored and at memory point 111 20 is stored. U is for Unassemble (or convert machine code to assembly code) So up to now we have gone through a bunch of commands, but probably one of the most useful is the U command. Let's say we don't understand machine code so well and instead want to see its equivalent assembly code. We can type the U command followed by the start memory point, followed by the end memory point, and it will show us the assembly code equivalent of the machine code. E is for a bunch of things… The E command can be used for a bunch of things… One example is to enter data or machine code instructions directly into memory. It can also be used to display the contents of memory locations. I am not going to worry too much about it in this post. N / L / W is for Name, Load & Write So we have written our assembly code in DEBUG, and now we want to save it to disk, write it as a com file, or load it. This is where the N, L & W commands come in handy. The n command is used to give a name to the executable program file and is pretty simple to use. The w command is a bit trickier. It saves to disk all the memory between point bx and point cx, so you need to specify the bx memory address and the cx memory address for it to write your code. Let's look at an example illustrated below.
    You do this by calling the r command followed by either bx or cx. We can then go to the directory where we were working and will see the new file with the name we specified. The L command is relatively simple. You would first specify the name of the file you would like to load using the N command, and then call the L command. Q is for Quit The last command that I am going to write about in this post is the Q command. Simply put, calling the Q command exits DEBUG. Commands we did not cover Out of the standard DEBUG commands we covered A, T, G, D, U, E, P, R, N, L & W. The ones we did not cover were H, I & O – I might mention these in a later post, but for the basics they are not really needed. Some Useful Resources Please note this post is based on the COS2213 handouts from UNISA. A Guide to DEBUG - http://mirror.href.com/thestarman/asm/debug/debug.htm#NT

    Read the article

  • How to create an isolated test environment

    - by Safin09
    Hi all, I am very new to the VMware world. We have VMware vCenter 4 in the production environment, and we have created multiple VLANs through a Cisco switch. I want to know how I can create an isolated test environment for software testing purposes only, so that anything that happens in that test VLAN will not cause trouble in the production environment. Is "host-only networking" the solution, or is there a better way to achieve this result? My requirements: A. Hosts should be able to access the Internet and a network share drive, but not the production network. B. Hosts should be able to connect to each other inside the virtual LAN. C. I should be able to take automatic or periodic backups or snapshots and deploy a snapshot when necessary. Whatever your answer is, please give me the steps for how to do it, if possible. If I need to purchase anything, I am ready to do so, but I don't want to spend big money. Many thanks in advance.

    Read the article

  • Chess as a team building exercise for software developers

    - by maple_shaft
    The last place I worked wasn't a particularly great place, and there were more than a few nights when we were working late into the evening trying to meet our sprints. The team, while stressed out, got pretty close, and people started bringing in little mind teasers and puzzles, just something we would all play around with and try to solve while a build/deploy was running for the test environment, or while we were waiting for the integration test run to finish. Eventually it turned into people bringing chess boards in and setting them at their desks. We would play by email, sending each other moves in chess notation, but at a very casual pace, with games lasting sometimes two or three days. Management tolerated this when we were putting in overtime, but as things were being managed better and people weren't working much more than 40 hours a week, they started cracking down on this and told us that we weren't allowed to have chess boards at our desks, although they were okay with the puzzle games. What are the pros and cons, in your opinion, of allowing chess during software development lull time?

    Read the article

  • Selecting the (right?) technology and environment

    - by Tor
    We are two developers on the edge of starting new web product development. We are both fans of the lean start-up approach and would like to practice continuous deployment. Here comes the dilemma - we are both coming from a C# / Windows background and we need to decide between: Stick to .NET and Windows - we will not waste time learning new technologies and can put all our effort into the development. Switch to Ruby on Rails and Linux - which have a good reputation for fast ramp-up and vast open source support. The negative side is that we will need to put a lot of effort into learning Ruby, Rails and Linux... What would you do? What other considerations should we take into account?

    Read the article

  • Best development architecture for a small team of programmers ( WAMP Stack )

    - by Tio
    Hi all. I'm in the first month of work at a new company, and after I met the two programmers and asked how things are organized in terms of projects inside the company, they simply shrugged their shoulders and said that nothing is organized. I think my jaw hit the ground right there. (I know some of you think I should quit, but I'm in a privileged position: I'm the most experienced there, so there's room for me to grow inside the company, and I'm taking the high road.) So I talked to the IT guy and one of the programmers, and maybe this week I'm going to get a server all to myself to start organizing things. I've used various architectures in my previous work experiences: in one I was developing on a server on the network (no source control, of course); in another I was developing on my local computer, with no server on the network, just source control. And at home I have a mix of the two: everything I code is on a server on the network, I have those folders under source control, and I also have a no-ip account configured on that server so I can access it everywhere and show the clients anything. For me, this last solution (the one I have at home) is the best: a network server with a WAMP stack; the server has a public IP so we can access it by domain name, and we use subdomains for each project; everybody works directly on the network server. I think the problem arises when two or more people want to work on the same project; in this case the only way to do it is by using source control and local repositories. This is great, but I think it makes development a lot more complicated. In the example I gave, to make a change to the code, I would simply need to open the file in my favorite editor, make the change, alter the database, check the changes into source control, and presto, all done. Using local repositories, I would have to get the latest version, run the scripts on the local database to update it, alter the file, alter the database, check in the changes to the network server, update the database on the network server, see if everything is running well on the network server, and presto, all done. To me this seems overcomplicated for a change to a simple PHP page. I could share the database between local development and the network server; that sure would help. Maybe the best way to do this is simply: a network server with a WAMP stack (a test server, so to speak), a public server accessible through the web, and a LAMP stack on every developer's computer (minus the database). We develop locally, test, then check the changes into the test server, and presto. What do you think? Maybe I should start doing this at home. Thanks and best regards... Edit: I'm sorry, I made a mistake and switched WAMP with LAMP; sorry about that.

    Read the article

  • Running 32-bit SSIS in a 64-bit Environment

    - by John Paul Cook
    After my recent post on where to find the 32-bit ODBC Administrator on a 64-bit SQL Server, a new question was asked about how to get SSIS to run with the 32-bit ODBC instead of the 64-bit ODBC. You need to make a simple configuration change to the properties of your BIDS solution. Here I have a solution called 32bitODBC and it needs to run in 32-bit mode, not 64-bit mode. Since I have a 64-bit SQL Server, BIDS defaults to using the 64-bit runtime. To override this setting, go to the property pages...(read more)

    Read the article

  • Join the Dark Side of Visual Studio 2010

    - by InfinitiesLoop
    Hard to believe it’s been so long, but it was almost 4 years ago when I published Join the Dark Side of Visual Studio. That was when a lot of people were still using VS2003, and importing and exporting environment settings required a custom add-in, VSStyler, which has since fallen off the planet and is hard to find (link, anyone? Let me know). Three versions of VS later, and I’m still using and loving the dark side. Pleased, I am (haha). In fact, that article for one reason or another is still one...(read more)

    Read the article

  • Should companies require developers to credit code they didn't write?

    - by sunpech
    In academia, it's considered cheating if a student copies code/work from someone/somewhere else without giving credit, and tries to pass it off as his/her own. Should companies make it a requirement for developers to properly credit all non-trivial code and work that they did not produce themselves? Is it useful to do so, or is it simply overkill? I understand there are various free licenses out there, but if I find stuff I like and actually use, I really feel compelled to give credit via a comment in the code even if it's not required by the license (or lack thereof).
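
    For what it's worth, the credit itself can be lightweight. Here is a minimal sketch of the kind of comment I mean; the author, URL, and license are placeholders, not a real source:

        using System.Collections.Generic;
        using System.Linq;

        public static class StringHelpers
        {
            // Adapted from "Joining non-empty strings" by ExampleAuthor,
            // https://example.com/snippets/join-non-empty (MIT License).
            // Local changes: renamed parameters and skipped whitespace-only entries.
            public static string JoinNonEmpty(IEnumerable<string> parts, string separator)
            {
                return string.Join(separator, parts.Where(p => !string.IsNullOrWhiteSpace(p)));
            }
        }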

    Read the article

  • Enterprise MDM: Rationalizing Reference Data in a Fast Changing Environment

    - by Mala Narasimharajan
    By Rahul Kamath Enterprises must move at a rapid pace to establish and retain global market leadership by continuously focusing on operational efficiency, customer intimacy and relentless execution. Reference Data Management    For multi-national companies with a presence in multiple industry categories, market segments, and geographies, the ability to proactively manage changes and harness them to align the front office with back-office operations and performance management initiatives is critical to make the proverbial elephant dance. Managing reference data, including types and codes, business taxonomies, complex relationships as well as mappings, represents a key component of the broader agenda for enabling flexibility and agility, without sacrificing enterprise-level consistency, regulatory compliance and control. Financial Transformation  Periodically, companies find that processes implemented a decade or more ago no longer mirror the way of doing business and seek to proactively transform how they operate their business and underlying processes. Financial transformation often begins with the redesign of one’s chart of accounts. The ability to model and redesign one’s chart of accounts collaboratively, quickly validate against historical transaction bases and secure business buy-in across multiple line-of-business stakeholders, while continuing to manage changes within the legacy general ledger systems and downstream analytical applications during the in-flight transformation, can mean the difference between controlled success and project failure. Attend the session titled CON8275 - Oracle Hyperion Data Relationship Management: Enabling Enterprise Transformation at Oracle Openworld on Monday, October 1, 2012 at 4:45pm in Ballroom A of the InterContinental Hotel to learn how Oracle’s Data Relationship Management solution can help you stay ahead of the competition and proactively harness master (and reference) data changes to transform your enterprise. Hear in-depth customer testimonials from GE Healthcare and Old Mutual South Africa to learn how others have harnessed this technology effectively to build enduring competitive advantage through business process innovation and investments in master data governance. Hear GE Healthcare discuss how DRM has enabled financial transformation, ERP consolidation, mergers and acquisitions, and the alignment of reference data across financial and management reporting applications. Also, learn how Old Mutual SA has upgraded to EBS R12 Financials and is transforming the management of the chart of accounts for corporate reporting. Separately, an esteemed panel of DRM customers including Cisco Systems, Nationwide Insurance, Ralcorp Holdings and Mentor Graphics will discuss their perspectives on how DRM has helped them address business challenges associated with enterprise MDM, including major change management initiatives such as financial transformations, corporate restructuring, mergers & acquisitions, and the rationalization of financial and analytical master reference data to support alternate business perspectives for the alignment of EPM/BI initiatives. Attend the session titled CON9377 - Customer Showcase: Success with Oracle Hyperion Data Relationship Management at Openworld on Thursday, October 4, 2012 at 12:45pm in the Ballroom of the InterContinental Hotel to interact with our esteemed speakers firsthand.

    Read the article

  • How important is it to sacrifice your free time for accomplishing goals? [closed]

    - by Darf Zon
    I was reading a book about XP programming and agile teams. While I was reading, I saw this scenario. I've never worked with a development team (just in school), so I would like to know what you think about these situations: Your boss has asked you to deliver software in a time frame that can only be met by asking the project team to work overtime without pay. All team members have young children. Discuss whether you should accept this request from your boss or should persuade the team to give their time to the organization rather than to their families. What could be significant factors in the decision? As a programmer, you are offered a promotion to project manager, but your feeling is that you can make a more effective contribution in a technical role than in an administrative one. Discuss whether you should accept that promotion. Sometimes I sacrifice my free time to accomplish things at work, so it's very important to me to know your opinion, based on your experience.

    Read the article

  • LaTeX-like display programming environment

    - by Gage
    I used to be a hobbyist programmer, but now I'm also a fairly experienced physicist and find myself programming to solve certain problems quite a lot. In physics, we use variables with superscripts, subscripts, italics, underlines, etc. To bridge this gap to the computer we usually use LaTeX. Now, I generally use MATLAB for handling any data and such, and find it very irritating that I can't basically use LaTeX for variable names. Something as simple as σy has to be named either sigma_y or some descriptive name like peak_height_error. I don't necessarily want full-on workable LaTeX in my code, but I do want to be able to use Greek letters and super/sub-scripts at the very least. Does this exist?

    Read the article

  • Setting different default applications for different desktop environments

    - by Anwar
    I am using Ubuntu 12.04 with the default Unity interface. I later installed the KDE desktop, XFCE, LXDE, gnome-shell and Cinnamon. KDE comes with different default applications than Unity, such as kwrite for text editing, konsole as the virtual terminal, kfontview for font viewing and installing, and dolphin as the file browser. Other DEs come with some other default applications. The problem arises when you want to open a file, such as a text file, which can be opened by both gedit and kwrite: I want to use kwrite in KDE and gedit in Unity or Gnome. But there is no way to set it up like this. I can set the default application for text files by changing the respective settings in both KDE and Unity, but it becomes the default for both DEs. For example, if I set kfontview as the default font viewing application in KDE, it also opens fonts when I am in Unity or Gnome, and vice versa. This is a problem because loading another DE's program takes longer than loading the default one for the DE in use. My question is: Can I use different default applications for different DEs? How?

    Read the article

  • How do I Set PATH in /etc/profile.d?

    - by Rodrigo Sasaki
    I'm using zsh as my shell, and I'm trying to configure my environment. I usually define my $JAVA_HOME variable by creating a file /etc/profile.d/java.sh with the following content: export JAVA_HOME=/path/to/jdk export PATH=$JAVA_HOME/bin:$PATH Then I log out and back in, and it all works except that, for some reason, the PATH variable is not set. It recognizes JAVA_HOME, but not the new PATH; see this terminal snippet: ~ echo $JAVA_HOME /usr/lib/jvm/jdk1.8.0_05 ~ echo $PATH /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games And I confirmed it by trying to run a command from the JVM: ~ java -version zsh: command not found: java The PATH doesn't include $JAVA_HOME/bin as it should. Is there something else I should check? EDIT: I have checked that if I run source /etc/profile.d/java.sh everything runs correctly and my variables get set as they should, but shouldn't the scripts in /etc/profile.d run automatically?

    Read the article

  • alternative to environment variables

    - by tonyl7126
    The number of servers and the complexity of our application are growing, and we now have servers in different regions (hosted on AWS). Certain database operations require low latency, so we have put a database in each region (which is basically a user cache) to keep the network latency low. The way the application server currently knows which user cache/database to make its calls to depends on an environment variable set on it. This has been working fine, but it seems hacky and not optimal. Is there any way for this to be done automatically? I was considering using a package like fping and pinging each database when the app server reloads (or caching it the first time), and using the corresponding latencies to decide which database has the lowest latency for each app server. Not sure if this is the best idea though.
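
    The post doesn't say what the app servers run on, so purely as an illustration of the idea (probe each regional cache once at startup and cache the winner), here is a sketch in C#; the class, method, and any host names you pass in are hypothetical:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Net.NetworkInformation;

        static class UserCacheSelector
        {
            // Probe every candidate host a few times and return the one with the
            // lowest average round-trip time. Unreachable hosts are never chosen.
            public static string PickClosest(IEnumerable<string> hosts, int samples = 3)
            {
                using (var ping = new Ping())
                {
                    return hosts
                        .Select(host => new
                        {
                            Host = host,
                            Rtt = Enumerable.Range(0, samples).Average(i =>
                            {
                                try
                                {
                                    var reply = ping.Send(host, 1000);   // 1-second timeout
                                    return reply.Status == IPStatus.Success
                                        ? (double)reply.RoundtripTime
                                        : double.PositiveInfinity;
                                }
                                catch (PingException)
                                {
                                    return double.PositiveInfinity;      // DNS failure etc.
                                }
                            })
                        })
                        .OrderBy(x => x.Rtt)
                        .First()
                        .Host;
                }
            }
        }

    At startup the app would call PickClosest once with its known regional cache hosts and reuse the result, replacing the per-server environment variable.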

    Read the article

  • Licensing approach for a .NET library that might be used in desktop / web-service / cloud environments

    - by Bobrovsky
    I am looking for advice on how to architect licensing for a .NET library. I am not asking for tool/service recommendations or anything like that. My library can be used in a regular desktop application or in an ASP.NET solution, and now Azure services come into play. Currently, for desktop applications the library checks whether the application and company names from the version info are the same as the names the key was generated for. In other cases the library compares hardware IDs. Now there are problems: an Azure-enabled web application can be run on different hardware each time (AFAIK); sometimes the hardware ID for the same hardware changes unexpectedly; checking the hardware ID or version info might not be allowed in some circumstances (shared hosting, for example). So, I am thinking about what approach I can take to architect a licensing scheme that: is friendly to customers (I do not try to fight piracy, but I do want to warn the customer if he uses the library on more servers than he paid for); can be used when there is no internet connection; can be used on shared hosting. What would you recommend?

    Read the article
