Search Results

Search found 5298 results on 212 pages for 'automated deploy'.

Page 175/212

  • Where do you take mocking - immediate dependencies, or do you grow the boundaries...?

    - by Peter Mounce
    So, I'm reasonably new to both unit testing and mocking in C# and .NET; I'm using xUnit.net and Rhino Mocks respectively. I'm a convert, and I'm focusing on writing behaviour specifications, I guess, instead of being purely TDD. Bah, semantics; I want an automated safety net to work above, essentially. A thought struck me though. I get programming against interfaces, and the benefits as far as breaking apart dependencies go. Sold. However, in my behaviour verification suite (aka unit tests ;-) ), I'm asserting behaviour one interface at a time. As in, one implementation of an interface at a time, with all of its dependencies mocked out and expectations set up. The approach seems to be that if we verify that a class behaves as it should against its collaborating dependencies, and in turn rely on each of those collaborating dependencies to have signed that same quality contract, we're golden. Seems reasonable enough. Back to the thought, though. Is there any value in semi-integration tests, where a test fixture is asserting against a unit of concrete implementations that are wired together, and we're testing its internal behaviour against mocked dependencies? I just re-read that and I think I could probably have worded it better. Obviously, there's going to be a certain amount of "well, if it adds value for you, keep doing it", I suppose - but has anyone else thought about doing that, and reaped benefits from it that outweigh the costs?
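
    A minimal sketch of the semi-integration shape being described, using the xUnit.net + Rhino Mocks stack named in the question (the Order/OrderProcessor/PriceCalculator types are hypothetical, invented for illustration): the inner collaborators are concrete and wired together, and only the outer boundary is mocked.

        using Xunit;
        using Rhino.Mocks;

        public class Order { public int Quantity; public decimal UnitPrice; public decimal Total; }

        public interface IOrderStore { void Save(Order order); }

        public class PriceCalculator
        {
            public decimal TotalFor(Order o) { return o.Quantity * o.UnitPrice; }
        }

        public class OrderProcessor
        {
            private readonly PriceCalculator _calc;
            private readonly IOrderStore _store;

            public OrderProcessor(PriceCalculator calc, IOrderStore store)
            {
                _calc = calc;
                _store = store;
            }

            public void Process(Order o)
            {
                o.Total = _calc.TotalFor(o); // real collaborator doing real work
                _store.Save(o);              // outer boundary stays mocked in the test
            }
        }

        public class OrderProcessorSemiIntegrationTest
        {
            [Fact]
            public void Processing_an_order_persists_the_calculated_total()
            {
                var store = MockRepository.GenerateMock<IOrderStore>();
                // concrete implementations wired together; no mock between them
                var processor = new OrderProcessor(new PriceCalculator(), store);

                processor.Process(new Order { Quantity = 3, UnitPrice = 10m });

                store.AssertWasCalled(s => s.Save(Arg<Order>.Matches(o => o.Total == 30m)));
            }
        }

    The trade-off is visible right in the test: a failure now points at the cluster rather than a single class, which is exactly the cost/benefit question the post raises.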

  • Is there an event that raises after a View/PartialView executes in ASP.NET MVC 2 RC2?

    - by sabanito
    I have the following problem: We have an ASP.NET MVC 2 RC 2 application that programmatically impersonates an AD account that the user specifies at logon. This account is used to access the DB. At first we had the impersonating code in the begin_request and we were undoing the impersonation at the end_request, but when we tried to use IIS 7.5 in integrated mode, we learned that it's not possible to impersonate in the Global.asax, so we tried different things. We have successfully moved our code from the BeginRequest to the ActionExecuting event and the EndRequest to the ResultExecuted, and now about 80% of our code works. We've just discovered that since we're passing the Entity Framework objects as models for our views, there's this remaining 20% that won't work because some navigation properties are not loaded when the view begins its execution, so we're getting connection exceptions from SQL Server. Is there any event or method that executes AFTER the view, so we can undo the impersonation in it? We thought ResultExecuted would do just that, but it doesn't. We've been told that passing the plain entities into the view as models is not a good idea, but we have A LOT of views that may have this problem and there's no automated way to find them. If some of you could explain why it's not a good idea, maybe we can convince the team to fix it!
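
    One workaround, sketched under assumptions (a hypothetical ShopEntities object context and Orders/Customer/Lines.Product model): eagerly load every navigation property the view will touch before the action returns, so nothing tries to hit the database after the impersonation has been undone.

        using System.Linq;
        using System.Web.Mvc;

        public class OrdersController : Controller
        {
            public ActionResult Details(int id)
            {
                using (var db = new ShopEntities()) // hypothetical EF object context
                {
                    var order = db.Orders
                                  .Include("Customer")      // pull the joins up front
                                  .Include("Lines.Product") // every path the view touches
                                  .First(o => o.OrderId == id);
                    return View(order); // nothing left to load while the view renders
                }
            }
        }

    This is also the usual argument for view models: a view model is fully materialized by construction, so a view can never trigger a query (or need a connection) after the action has finished.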

  • Are there any ASP.NET MVC subscription-based starter kits or examples?

    - by Wayne M
    Basically something that handles the low-level "plumbing" code for a subscription-based service. I see a lot of things dealing with basic membership, but nothing that handles the subscription aspect (recurring billing, automated jobs for setting up billing, notification for billing, etc). This might be the one thing that keeps me from using ASP.NET MVC for my SaaS idea, since it would take a fair amount of development time to write my own; if I go with my other option, Ruby on Rails, I can buy a kit that does all of this for $250. I haven't found anything even remotely close to this for .NET - all of the SaaS sample apps I've seen are more like StackOverflow et al., where you have one site that multiple people log on to, not the web application model where you have subscribers who are billed monthly, each of whom has users and other entities (e.g. Customers, Tasks, etc) for their own site. Is there anything similar for ASP.NET, or some kind of guidelines for writing my own if I have to, so I don't waste too much time? As a startup, that means I'm doing all the coding myself. I've found this, but it seems to only be for billing and didn't seem to have much (any?) documentation on exactly how to set it up.

  • Does HttpListener work well on Mono?

    - by billpg
    Hi everyone. I'm looking to write a small web service to run on a small Linux box. I prefer to code in C#, so I'm looking to use Mono. I don't want the overhead of running a full web server or Mono's version of ASP.NET. I'm thinking of having a single process with a thread dealing with each client connection. Shared memory between threads instead of a database. I've read a little on Microsoft's version of HttpListener and how it works with the Http.sys driver. Alas, Mono's documentation on this class is just the automated class interface with no discussion of how it works under the hood. (Linux doesn't have Http.sys, so I imagine it's implemented substantially differently.) Could anyone point me towards some resources discussing this module please? Many thanks, Bill, billpg.com (A little background to my question for the interested.) Some time ago, I asked this question, interested in keeping a long conversation open with lots of back-and-forth. I had settled on designing my own ad-hoc protocol, but people I spoke to really wanted a REST interface, even at the cost of the "Okay, send your command now" signal. So, I wondered about running ASP.NET on a Linux/Mono server, but stumbled upon HttpListener. This seemed ideal, as each "conversation" could run in a separate thread. The thread that calls HttpListener in a loop can look for which thread each incoming connection is for and pass the reference to that thread. The alternative for an ASP.NET-driven service would be to have the ASPX code pick up the state from a database, and write back the new state when it finishes. Yes, it would work, but that's a lot of overhead.
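
    For reference, the class presents the same API on Mono as on .NET; here is a minimal sketch of the thread-per-conversation pattern described above (the port and response text are made up, and how Mono implements the listener internally - presumably a plain managed socket listener, since there is no Http.sys - is an assumption to verify against the Mono source):

        using System;
        using System.Net;
        using System.Text;
        using System.Threading;

        class TinyService
        {
            static void Main()
            {
                var listener = new HttpListener();
                listener.Prefixes.Add("http://*:8080/");
                listener.Start();

                while (true)
                {
                    // blocks until a request arrives, then hands it to a worker thread
                    HttpListenerContext ctx = listener.GetContext();
                    new Thread(() => Handle(ctx)).Start();
                }
            }

            static void Handle(HttpListenerContext ctx)
            {
                byte[] body = Encoding.UTF8.GetBytes("hello from " + ctx.Request.Url.AbsolutePath);
                ctx.Response.ContentLength64 = body.Length;
                ctx.Response.OutputStream.Write(body, 0, body.Length);
                ctx.Response.OutputStream.Close();
            }
        }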

  • Field specific errors for ETL

    - by AaronLS
    I am creating an ETL process in MS SQL Server and I would like to have errors specific to a particular column of a particular row. For example, the data is initially loaded from excel files into a table (we'll call it the Initial table) where all columns are varchar(2000), and then I stage the data to another table (the DataTypedTable) that contains more specific data types (datetime, int, etc.) or more tightly constrained varchar lengths. I need to be able to create error messages for a specific field such as: "Jan. 13th" is not a valid date format for the submission date. Please use a format of MM/DD/YYYY. These error messages would need to be stored in some way such that later in the process an automated process can create reports with the error messages, such that each message references a specific row and field (someone will need to go back and correct the data in the source system and resubmit the excel file). So ideally it would be inserted into a Failures table of some sort and contain the primary key of the failed row, the column name, and the error message. Question: So I am wondering if this can be accomplished with SSIS, or some open source tool like Talend, and if so, what would be your general approach? Or what hand-coded approach would you take? A couple of approaches I've thought of use SQL (up until now I have done ETL by hand in SQL procs, but I want to consider other approaches - possibly even C#): Use a cursor to read through the Initial table, and for each row insert a blank record with only the primary key into the DataTyped table, then use a single update statement for each column, such that if that update fails I can insert a very specific error message specific to that column in the error messages table. Or: insert all the data as is into the DataTyped table, but have duplicate columns like SubmissionDate and SubmissionDateOld. After the initial insert the *Old columns have data and the rest are blank, and I have a single update for each column that sets the SubmissionDate based on the SubmissionDateOld. In addition to suggesting an approach, I'd like to know if you are using that approach or something similar already in the work you do.
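
    A sketch of what the per-column validation could look like if it moved out of SQL procs and into C# (the row layout, column names, and message text here are hypothetical; the point is that each failed column yields its own row-key + column + message record destined for the Failures table):

        using System;
        using System.Collections.Generic;
        using System.Globalization;

        class FieldError
        {
            public int RowKey;
            public string Column;
            public string Message;
        }

        static class RowValidator
        {
            // validate one staged row column by column, collecting field-specific errors
            public static List<FieldError> Validate(int rowKey, IDictionary<string, string> raw)
            {
                var errors = new List<FieldError>();

                DateTime submissionDate;
                if (!DateTime.TryParseExact(raw["SubmissionDate"], "MM/dd/yyyy",
                        CultureInfo.InvariantCulture, DateTimeStyles.None, out submissionDate))
                {
                    errors.Add(new FieldError {
                        RowKey = rowKey, Column = "SubmissionDate",
                        Message = "\"" + raw["SubmissionDate"] +
                                  "\" is not a valid date format for the submission date. " +
                                  "Please use a format of MM/DD/YYYY."
                    });
                }

                int quantity;
                if (!int.TryParse(raw["Quantity"], out quantity))
                {
                    errors.Add(new FieldError {
                        RowKey = rowKey, Column = "Quantity",
                        Message = "\"" + raw["Quantity"] + "\" is not a whole number."
                    });
                }

                return errors; // insert these into the Failures table, keyed by row + column
            }
        }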

  • TFS Solution build cascading to several other builds even when common components were not modified

    - by Bob Palmer
    Hey all, here is the issue I am currently trying to work through. We are using Team Foundation Server 2008, and utilizing the automated build support out of the box. We have one very large project that encompasses a number of interrelated components and web sites, each of which is set up as a Visual Studio solution file. Many of these solutions are highly interrelated since they may contain applications, or contain common libraries or shared components. We have roughly 20 or so applications, three large web sites, and about 20 components. Each solution may include projects from other solutions. For example, a solution for a console app would also include the project files for all of the components it utilizes, since we need to ensure that when someone changes a component and rebuilds it, it is reflected in all of the projects that consume that component, and we can make sure nothing was broken. We have build projects for each solution, whether that's an application, component, or web site. For this example, we will call them solutions 01, 02, and 03. These reference multiple projects (both their own core project and test projects, plus the projects relating to various components). Solution 01 has projects A, B, and C. Solution 02 has projects C, D, and E. Solution 03 has projects E, F, and G. Now, for the problem. If I modify project A, the system will end up rebuilding all three solutions. Worse, all thirty solutions reference common projects used for data access (let's call it project H). Because they all share one project in common, if I modify any solution in my stack, even if it does not touch project H, I still end up kicking off every single build script. Any thoughts on how to address this? Ideally I would only want to kick off builds whose constituent projects were directly modified - i.e. in the example above, if I modified project C, I would only rebuild solutions 01 and 02. Thanks!

  • Caching sitemaps in django

    - by michuk
    I implemented a simple sitemap class using django's default sitemap app. As it was taking a long time to execute, I added manual caching:

        class ShortReviewsSitemap(Sitemap):
            changefreq = "hourly"
            priority = 0.7

            def items(self):
                # try to retrieve from cache
                result = get_cache(CACHE_SITEMAP_SHORT_REVIEWS, "sitemap_short_reviews")
                if result != None:
                    return result

                result = ShortReview.objects.all().order_by("-created_at")

                # store in cache
                set_cache(CACHE_SITEMAP_SHORT_REVIEWS, "sitemap_short_reviews", result)
                return result

            def lastmod(self, obj):
                return obj.updated_at

    The problem is that memcached allows a max of 1MB per object. This one was bigger than 1MB, so storing it in the cache failed:

        >7 SERVER_ERROR object too large for cache

    The problem is that django has an automated way of deciding when it should divide the sitemap file into smaller ones. According to the docs (http://docs.djangoproject.com/en/dev/ref/contrib/sitemaps/): "You should create an index file if one of your sitemaps has more than 50,000 URLs. In this case, Django will automatically paginate the sitemap, and the index will reflect that." What do you think would be the best way to enable caching of sitemaps?

    - Hacking into the django sitemaps framework to restrict a single sitemap size to, let's say, 10,000 records seems like the best idea. Why was 50,000 chosen in the first place? Google's advice? A random number?
    - Or maybe there is a way to allow memcached to store bigger files?
    - Or perhaps once saved, the sitemaps should be made available as static files? This would mean that instead of caching with memcached I'd have to manually store the results in the filesystem and retrieve them from there the next time the sitemap is requested (perhaps cleaning the directory daily in a cron job).

    All those seem very low level and I'm wondering if an obvious solution exists...

  • What's the compelling reason to upgrade to Visual Studio 2010 from VS2008?

    - by Cheeso
    Are there new features in Visual Studio 2010 that are must-haves? If so, which ones? For me, the big draws for VS2008 as compared to VS2005 were LINQ, .NET Framework multitargeting, WCF (REST + Syndication), and general devenv.exe reliability. Granted, some of these features are framework things, and not tool things. For the purposes of this discussion, I'm willing to combine them into one bucket. What is the list of must-have features for VS2010 versus VS2008? Are there any? I am particularly interested in C#.

    Update: I know how to google, so I can get the official list from Microsoft. I guess what I really wanted was the assessment from people using it, as to which things are really notable. Microsoft went on for 3 pages about 2008/3.5 features, and many people sort of boiled it down to LINQ and a few other things. What is that short list for VS2010? Summary so far, what people think is cool or compelling:

    Visual Studio engine:
    - multi-monitor support
    - new extensibility model based on WPF, prettier and more usable
    - new TFS stuff, incl. automated test tools
    - parallel debugging

    .NET Framework:
    - parallel extensions for .NET

    C# 4.0 (two of these are shown in the small demo below):
    - generic variance
    - optional and named params
    - easier interop with non-managed environments, like COM or Javascript

    VB 10.0:
    - collection and array literals / initializers
    - automatic properties
    - anonymous methods / statement lambdas

    I read up on these at Zander's blog. He described these and other features. Nobody on this list said anything about these Visual Studio engine items:
    - F# support
    - Javascript code-completion
    - jQuery is now included
    - UML
    - better SharePoint capabilities
    - C++ moves to msbuild project files
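
    Of the C# 4.0 items, optional/named parameters and generic variance are easy to show in a few self-contained lines (a small demo, not tied to any particular project):

        using System;
        using System.Collections.Generic;

        class CSharp4Demo
        {
            // optional parameters with defaults; callers can also name arguments
            static void Connect(string host, int port = 80, bool secure = false)
            {
                Console.WriteLine("{0}:{1} secure={2}", host, port, secure);
            }

            static void Main()
            {
                Connect("example.org");               // port and secure take defaults
                Connect("example.org", secure: true); // named argument skips port

                // generic variance: IEnumerable<out T> is covariant in C# 4.0,
                // so a sequence of strings can stand in for a sequence of objects
                IEnumerable<string> names = new List<string> { "a", "b" };
                IEnumerable<object> objects = names;
                foreach (object o in objects)
                    Console.WriteLine(o);
            }
        }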

  • Alternative to MS Project 2007 for production scheduling?

    - by john c
    OK... I'm coming to grips with the fact that MS Project 2007 may not be the correct tool for my production scheduling. We serve 120 to 150 projects a year with durations from 6 weeks to 12 months. The tasks are simple (6 to 8 per project) and the resource pool is stable (15 to 20 people). It's really an assembly line product but with extremely varied durations. I need to be able to prioritize the projects for production and run projects concurrently to fully utilize my resources. What are the feelings of the stackoverflow community? Am I using the wrong program? I was really hoping to make this simple enough for non-programmer types to input project data into a form, and have the scheduling software automated enough to make most of the decisions. Is there a better solution available commercially? I'd like to hold off on writing a custom spreadsheet as a last resort, but if that's the best route then so be it. Thank you so much for your input.

  • Packaging reference documentation with jar file

    - by soren.enemaerke
    We are porting our .NET library to a java equivalent and are now looking at how to distribute this port. Packaging the classes into a jar file seems like best practice and we would then ship this jar file in a zip along with some license terms. But what about the documentation? In .NET land it seems like best practice to distribute the xml file that can be consumed by tooling (Visual Studio), but we can't seem to find such best practices for java. We have javadoc comments on our public classes and interfaces, so we are just looking for a way to generate and distribute these comments in a way that is developer friendly (we're thinking easily consumed from various IDEs). What are developers expecting and how do you best deliver this? We would really prefer to bundle the documentation along with the jar file and not have to host the documentation on our website. EDIT: We would like our documentation to appear inside the java IDEs, so we want to provide it in a way that integrates into the IDEs as gracefully as possible. In .NET land this is an xml file placed next to the .dll file, but is there a similar concept for jar files that enables the integration into tooling? PS: We are developing in Eclipse and have an ant task doing the building and jar-file packaging in our automated build.

  • What's the Matlab equivalent of NULL, when it's calling COM/ActiveX methods?

    - by David M
    Hi, I maintain a program which can be automated via COM. Generally customers use VBS to do their scripting, but we have a couple of customers who use Matlab's ActiveX support and are having trouble calling COM object methods with a NULL parameter. They've asked how they do this in Matlab - and I've been scouring Mathworks' COM/ActiveX documentation for a day or so now and can't figure it out. Their example code might look something like this:

        function do_something()
            OurAppInstance = actxserver('Foo.Application');
            OurAppInstance.Method('Hello', NULL)
        end

    where NULL is where in another language, we'd write NULL or nil or Nothing, or, of course, pass in an object. The problem is this is optional (and these are implemented as optional parameters in most, but not all, cases) - these methods expect to get NULL quite often. They tell me they've tried [] (which from my reading seemed the most likely) as well as '', Nothing, 'Nothing', None, Null, and 0. I have no idea how many of those are even valid Matlab keywords - certainly none work in this case. Can anyone help? What's Matlab's syntax for a null pointer / object for use as a COM method parameter? Update: Thanks for all the replies so far! Unfortunately, none of the answers seem to work, not even libpointer. The error is the same in all cases: Error: Type mismatch, argument 2. This parameter in the COM type library is described in RIDL as:

        HRESULT _stdcall OurMethod([in] BSTR strParamOne,
                                   [in, optional] OurCoClass* oParamTwo,
                                   [out, retval] VARIANT_BOOL* bResult);

    The coclass in question implements a single interface descending from IDispatch.

  • How to document an accessor/mutator method in phpDoc/javaDoc?

    - by nickf
    Given a function which behaves as either a mutator or accessor depending on the arguments passed to it, like this:

        // in PHP, you can pass any number of arguments to a function...
        function cache($cacheName) {
            $arguments = func_get_args();

            if (count($arguments) >= 2) {
                // two arguments passed. MUTATOR.
                $value = $arguments[1];            // use the second as the value
                $this->cache[$cacheName] = $value; // *change* the stored value
            } else {
                // 1 argument passed, ACCESSOR
                return $this->cache[$cacheName];   // *get* the stored value
            }
        }

        cache('foo', 'bar'); // nothing returned
        cache('foo');        // 'bar' returned

    How do you document this in PHPDoc or a similar automated documentation creator? I had originally just written it like this:

        /**
         * Blah blah blah, you can use this as both a mutator and an accessor:
         *
         * As an accessor:
         * @param $cacheName name of the variable to GET
         * @return string the value...
         *
         * As a mutator:
         * @param $cacheName name of the variable to SET
         * @param $value the value to set
         * @return void
         */

    However, when this is run through phpDoc, it complains because there are 2 return tags, and the first @param $cacheName description is overwritten by the second. Is there a way around this?

  • latin1/unicode conversion problem with ajax request and special characters

    - by mfn
    Server is PHP5 and HTML charset is latin1 (iso-8859-1). With regular form POST requests, there's no problem with "special" characters like the em dash (–) for example. Although I don't know for sure, it works. Probably because there exists a representable character for the browser at char code 150 (which is what I see in PHP on the server for a literal em dash with ord). Now our application also provides some kind of preview mechanism via ajax: the text is sent to the server and a complete HTML for a preview is sent back. However, the ordinary char code 150 em dash character, when sent via ajax (tested with GET and POST), mutates into something more: %E2%80%93. I see this already in the apache log. According to various sources I found, e.g. http://www.tachyonsoft.com/uc0020.htm, this is the UTF-8 byte representation of the em dash, and my current knowledge is that JavaScript handles everything in Unicode. However, within my app I need everything in latin1. Simply said: just like a regular POST request would have given me that em dash as char code 150, I would need that for the translated UTF-8 representation too. That's where I'm failing, because with PHP on the server when I try to decode it with either utf8_decode(...) or iconv('UTF-8', 'iso-8859-1', ...), in both cases I get a regular ? representing this character (and iconv also throws me a notice: Detected an illegal character in input string). My goal is to find an automated solution, but maybe I'm trying to be überclever in this case? I've found other people simply doing manual replacing with a predefined input/output set, but that would always give me the feeling I could lose characters. The observant reader will note that I'm behind on understanding the full impact/complexity of Unicode and conversion of chars, and I definitely prefer to understand the thing as a whole rather than do a simple manual mapping. Thanks.
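
    The likely missing step: char code 150 is not actually latin1 - it is Windows-1252, where byte 0x96 is the dash (U+2013); true ISO-8859-1 has no printable character at that position, which is why a UTF-8 -> iso-8859-1 conversion can only produce a ?. A small C# demonstration of the three encodings at work (C# is used here purely to show the byte values; the suggestion for PHP, to be verified, would be iconv with 'WINDOWS-1252' as the target):

        using System;
        using System.Text;

        class DashDemo
        {
            static void Main()
            {
                string dash = "\u2013"; // the character the browser sends as %E2%80%93

                byte[] utf8 = Encoding.UTF8.GetBytes(dash);
                Console.WriteLine(BitConverter.ToString(utf8)); // E2-80-93

                byte[] cp1252 = Encoding.GetEncoding(1252).GetBytes(dash);
                Console.WriteLine(cp1252[0]);                   // 150 -- the "latin1" the form posts

                byte[] latin1 = Encoding.GetEncoding("ISO-8859-1").GetBytes(dash);
                Console.WriteLine(latin1[0]);                   // 63 == '?', no mapping exists
            }
        }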

  • How can I convert a bunch of files from ISO-8859-1 to UTF-8 using Perl?

    - by tau
    I have several documents I need to convert from ISO-8859-1 to UTF-8 (without the BOM of course). This is the issue though. I have so many of these documents (it is actually a mix of documents, some UTF-8 and some ISO-8859-1) that I need an automated way of converting them. Unfortunately I only have ActivePerl installed and don't know much about encoding in that language. I may be able to install PHP, but I am not sure as this is not my personal computer. Just so you know, I use Scite or Notepad++, but both do not convert correctly. For example, if I open a document in Czech that contains the character "ž" and go to the "Convert to UTF-8" option in Notepad++, it incorrectly converts it to an unreadable character. There is a way I CAN convert them, but it is tedious. If I open the document with the special characters and copy the document to Windows clipboard, then paste it into a UTF-8 document and save it, it is okay. This is too tedious (opening every file and copying/pasting into a new document) for the amount of documents I have. Any ideas? Thanks!!!

  • Copy SQL Server data from one server to another on a schedule

    - by rwmnau
    I have a pair of SQL Servers at different webhosts, and I'm looking for a way to periodically update the one server using the other. Here's what I'm looking for:

    - As automated as possible - ideally, without any involvement on my part once it's set up.
    - Pushes a number of databases, in their entirety (including any schema changes), from one server to the other.
    - Freely allows changes on the source server without breaking my process. For this reason, I don't want to use replication, as I'd have to break it every time there's an update on the source, and then recreate the publication and subscription.
    - One database is about 4GB in size and contains binary data. I'm not sure if there's a way to export this to a script, but it would be a mammoth file if I did.

    Originally, I was thinking of writing something that takes a scheduled full backup of each database, FTPs the backups from one server to the other once they're done, and then the new server picks them up and restores them. The only downside I can see to this is that there's no way to know that the backups are done before starting to transfer them - can these backups be done synchronously? Also, the server being refreshed is our test server, so if there's some downtime involved in moving the data, that's fine. Does anybody out there have a better idea, or is what I'm currently considering the best non-replication way to go? Thanks for your help, everybody. UPDATE: I ended up designing a custom solution to get this done using BAT files, 7Zip, command-line FTP, and OSQL, so it runs in a completely automatic way and aggregates the data from a dozen servers across the country. I've detailed the steps in a blog entry. Thanks for all your input!
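
    On the synchronous question: yes - a backup issued as a plain T-SQL command blocks until it finishes, so anything sequenced after it (the FTP step, say) only runs once the .bak is complete. A minimal sketch (server, database, and path names are made up):

        using System.Data.SqlClient;

        class BackupRunner
        {
            static void Main()
            {
                using (var conn = new SqlConnection(
                    "Server=.;Database=master;Integrated Security=true"))
                {
                    conn.Open();
                    var cmd = new SqlCommand(
                        "BACKUP DATABASE [MyDb] TO DISK = 'D:\\backups\\MyDb.bak' WITH INIT",
                        conn);
                    cmd.CommandTimeout = 0; // backups can exceed the 30-second default
                    cmd.ExecuteNonQuery();  // does not return until the backup completes
                }
            }
        }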

  • Allowing Google to bypass CAPTCHA verification - sensible or not?

    - by edanfalls
    My web site has a database lookup; filling out a CAPTCHA gives you 5 minutes of lookup time. There is also some custom code to detect any automated scripts. I do this as I don't want someone data mining my site. The problem is that Google does not see the lookup results when it crawls my site. If someone is searching for a string that is present in the result of a lookup, I would like them to find this page by Googling it. The obvious solution to me is to use the PHP variable $_SERVER['HTTP_USER_AGENT'] to bypass the CAPTCHA and custom security code for the Google bots. My question is whether this is sensible or not. People could then use Google's cache to view the lookup results without having to fill out the CAPTCHA, but would Google's own script detection methods prevent them from data mining these pages? Or would there be some way for people to make $_SERVER['HTTP_USER_AGENT'] appear as Google to bypass the security measures? Thanks in advance.
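
    One caveat with keying off $_SERVER['HTTP_USER_AGENT'] alone: the user-agent string is trivial to spoof, so anyone could claim to be Googlebot and skip the CAPTCHA. The usual hardening is forward-confirmed reverse DNS, sketched here in C# purely for illustration (a PHP equivalent would build on gethostbyaddr/gethostbyname): resolve the caller's IP to a host name, require a googlebot.com/google.com suffix, then resolve that name back and confirm it maps to the same IP.

        using System;
        using System.Linq;
        using System.Net;

        static class BotCheck
        {
            public static bool IsRealGooglebot(string ip)
            {
                try
                {
                    // reverse lookup: IP -> host name claimed by the crawler
                    string host = Dns.GetHostEntry(ip).HostName;
                    if (!host.EndsWith(".googlebot.com") && !host.EndsWith(".google.com"))
                        return false;

                    // forward-confirm: the claimed name must resolve back to the caller
                    return Dns.GetHostEntry(host).AddressList
                              .Any(a => a.ToString() == ip);
                }
                catch (Exception)
                {
                    return false; // unresolvable hosts are treated as not-Google
                }
            }
        }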

  • Best way to test a Delphi application

    - by Osama ALASSIRY
    I have a Delphi application that has many dependencies, and it would be difficult to refactor it to use DUnit (it's huge), so I was thinking about using something like AutomatedQA's TestComplete to do the testing from the front-end UI. My main problem is that a bugfix or new feature sometimes breaks old code that was previously tested (manually), and used to work. I have set up the application to use command-line switches to open up a specific form that could be tested, and I can create a set of values and clicks that need to be done. But I have a few questions before I do anything drastic (and before purchasing anything):

    - Is it worth it? Would this be a good way to test?
    - The results of the tests should end up in my database (Oracle); is there an easy way in TestComplete to check these values (multiple fields in multiple tables)?
    - I would need to set up a test database to do all the automated testing; would there be an easy way to automate re-setting the test db, other than drop user cascade, create user, ..., impdp?
    - Is there a way in TestComplete to specify command-line parameters for an exe?

    Does anybody have any similar experiences?

  • Winsock tcp/ip Socket listening but connection refused, race condition?

    - by Wayne
    Hello folks. This involves two automated unit tests which each start up a tcp/ip server that creates a non-blocking socket, then bind()s and listen()s in a loop on select() for a client that connects and downloads some data. The catch is that they work perfectly when run separately, but when run as a test suite, the second test client will fail to connect with WSAECONNREFUSED... UNLESS there is a Thread.Sleep() of several seconds between them??!!! Interestingly, there is a retry loop every 1 second for connecting after any failure, so the second test loops for a while until timeout after 10 minutes. During that time, netstat -na shows the correct port number is in the LISTEN state for the server socket. So if it is in the LISTEN state, why won't it accept the connection? In the code, there are log messages that show the select NEVER even gets a socket ready to read (which means ready to accept a connection when it applies to a listening socket). Obviously the problem must be related to some race condition between finishing one test - which means close() and shutdown() on each end of the socket - and the start-up of the next. This wouldn't be so bad if the retry logic allowed it to connect eventually after a couple of seconds. However, it seems to get "gummed up" and won't even retry. And for some strange reason the listening socket SAYS it's in the LISTEN state even though it keeps refusing connections. So that means it's the Windoze O/S which is actually catching the SYN packet and returning a RST packet (which means "Connection Refused"). The only other time I ever saw this error was when the code had a problem that caused hundreds of sockets to get stuck in the TIME_WAIT state. But that's not the case here: netstat shows only about a dozen sockets, with only 1 or 2 in TIME_WAIT at any given moment. Please help.

  • super light software development process

    - by Walty
    Hi, in the development processes I have been involved in so far, most teams had a SINGLE member, or occasionally two. We used python + django for the major development, and the development process is actually very fast; we do have code reviews, design pattern discussions, and constant refactoring. Though team size is small, I do think there are some development processes / best practices that could be enforced. For example, using svn would definitely be better than regular copy backups. I did read some articles & books about Agile, XP & continuous integration; I think they are nice, but still too heavy for this case (a team of 1 or 2, and fast coding). For example, IMHO, with nice design patterns and iterative development + refactoring, TDD MIGHT be overkill, or at least the overhead does not outweigh the advantages. And so might pair programming. Automated testing is a nice idea, but it seems not technically feasible for every project. Our current practices are: svn + milestones + code reviews. I wonder if there are development processes / best practices specifically targeted at such super-light teams? Thanks.

  • Customers angry, fighting unknown DLL dependencies

    - by wheaties
    I'm a one man show developing a C++ Windows application for a customer. Over the past several months we've been running into the same problems with missing DLL dependencies on customer machines. Despite my best efforts something keeps going wrong, and we get angry emails back. My boss and my boss's boss are angry with me and the customers aren't happy. I'm hoping you guys can help out and give suggestions/ideas on how to get the deliverables in order. To preempt some of the obvious:

    - I have no test machine. That is, I can't replicate the customer environment nor attempt to install the app on a "clean" system to catch gotchas before shipping.
    - I've tried using depends.exe to track down what versions of the DLLs my project is dependent upon. I'm shipping our code with the redistributables I've been able to find that way. After that it's an angry-customer-email waiting game.
    - I'm required to use a third-party DLL which cannot be registered (it's buggy as hell).
    - I'm not supposed to use InstallShield, any other automated installer, or write an install script. I provide written instructions on how to get the app installed (unzip, double-click the exe file).

    I'm tired of taking heat for this stuff. What am I missing that I could be doing? What should I ask for in terms of support from my employer? How should I ask for that support in a way that they'll provide it?

  • how could application installations/configurations be easier in linux? [closed]

    - by ajsie
    Although you can do anything in Linux, it tends to require a lot of tweaking in config files and reading a lot of manuals/tutorials before you can have it running your way. I know that it gets a lot easier over time, and the apt-get installations with Ubuntu/Debian are heading the right way. But how can Linux be more user-friendly for us in the future? I thought that if more were automated, like in an IDE environment - e.g. typing svn would give us all the commands and a description of each command as you move between them with your keyboard - that would be great. But that's just one example. Another is navigation in the terminal between folders: right now you have to type a lot just to jump from/to different folders. It would be great with some more automation here too. I know that these extra features will slow down the server, but it's 2010 now, and these features are not that heavy for the CPU; they make things more user-friendly and encourage maintenance of a server, rather than frighten you off. What do you think about this? Should/could we have a more user-friendly Linux environment on servers - something that has annoyed you a lot? A lot of things are done in the unix way, but maybe we should reinvent the wheel in some areas, because apparently it's so repetitive today and difficult to do easy tasks. It should be easier, I think.

  • iphone: cross platform references and referencing external framework resources

    - by dan
    Hi there, I'm working on an iPhone app and a separate framework. The separate framework is for an API that I'm building for use in multiple future apps, and this API now needs to reference resources (images). What I would like to do is keep the resources WITH the API framework as a local set of resources. I followed the instructions from http://www.clintharris.net/2009/iphone-app-shared-libraries/ to set up my app's project to use the headers from the separate API framework. What I can't seem to figure out is how to automatically load the framework's resources into the app's Xcode environment so they can be linked in at app compile time. Sure, I can drag the resources across from the framework into the main app's set of resources, but that seems kinda ugly and another step that could possibly be automated (??). Anyone know of a better way? It would be great if any changes from the framework were automatically available in the main app (due to the project 'link-age'). Thanks for any help/tips/suggestions...

  • CM and Agile validation process of merging to the Trunk?

    - by LoneCM
    Hello All, We are a new Agile shop and we are encountering an issue that I hope others have seen. In our process, the Trunk is considered an integration branch; it does not have to be releasable, but it does have to be stable and functional for others to branch off of. We create Feature branches of the Trunk for new development. All work and testing occurs in these branches. An individual branch pulls up as needed to stay integrated with the Trunk as other features are accepted and committed. But now we have numerous feature branches. Each is focused, has a short life cycle, and is pushed to the trunk as it is completed, so we are not debating the need for the branches and are trying very much to be Agile. My issue comes in here: I require that the branches pull up from the Trunk at the end of their life cycle and complete the validation, regression testing, and all configuration work before pushing to the trunk. Once reintegrated into the Trunk, I ask for at least a build and an automated smoke test. However, I am now getting push back on the Trunk validation. The argument is that the developers can merge the code and not need the QA validation steps because they already completed the work in the feature branch; therefore, the extra testing is not needed. I have attempted to remind management of the numerous times "brainless" merges have failed. Their solution is to have the developer diff the Feature branch and the newly merged Trunk instead of doing a build and regression testing. That process, in their minds, would replace the regression testing I asked for. So what do you require when you reintegrate back to the Trunk? What are the issues that we will encounter if we remove this step and replace it with the diff? Is the cost of staying Agile the additional work of the integration of the branches? Thanks for any input. LoneCM

  • Are there solutions for streamlining the update of legacy code in multiple places?

    - by ccomet
    I'm working in some old code which was originally designed for handling two different kinds of files. I was recently tasked with adding a new kind of file to this code. Most of my problems were solved by filling out an extensive XML file with a new entry that handled everything from what lists were named to how the file is written in plural lower case. But this ended up being insufficient, as there were maybe 50 different places in 24 different code files where I had to update hardcoded switch-statements that only branched for the original two file types. Unfortunately there is no consistency in this; there are methods which operate half from the XML file, and half off of hardcode. Some of the files which look like they would operate off of the XML file don't, and some that I would expect that I'd need to update the hardcode don't need it. So the only way to find the majority of these is to run through testing the whole system when only part of it is operational, finding that one step to fix (when I'm lucky that error logging actually tells me what is going on), and then running the whole thing again. This wastes time testing the parts of the code which are already confirmed to work, time better spent testing the new parts I have to add on top of it all. It's a hassle and a half, and to my luck I can expect that I will have to add yet another new kind of file in the near future. Are there any solutions out there which can aid in this kind of endeavour? Something which I can input some parameters of current features, document what points in a whole code project actually need to be updated, and run something nice the next time I need to add a new feature to the code. It needn't even be fully automated, something that'll help me navigate straight to the specific points in everything and maybe even record what kind of parameters need to be loaded. Doubt it matters specifically, but the code is comprised of ASP.NET pages, some ASP.NET controls, hundreds of C# code files, and a handful of additional XML files. It's all currently in a couple big Visual Studio 2008 projects.
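
    There may not be a tool that finds the scattered switches for you, but one way to stop the problem growing - sketched here in C# with hypothetical names - is to funnel the per-file-type branching through a single registry populated from the existing XML, so adding a new file type becomes one registration instead of 50 edits:

        using System;
        using System.Collections.Generic;

        interface IFileTypeHandler
        {
            string PluralLowerCase { get; }
            void Write(string path);
        }

        static class FileTypeRegistry
        {
            static readonly Dictionary<string, IFileTypeHandler> handlers =
                new Dictionary<string, IFileTypeHandler>(StringComparer.OrdinalIgnoreCase);

            // called once at startup, driven by the entries in the XML file
            public static void Register(string typeName, IFileTypeHandler handler)
            {
                handlers[typeName] = handler;
            }

            // call sites ask the registry instead of keeping their own switch
            public static IFileTypeHandler For(string typeName)
            {
                IFileTypeHandler h;
                if (!handlers.TryGetValue(typeName, out h))
                    throw new ArgumentException("No handler registered for " + typeName);
                return h;
            }
        }

    A forgotten call site then fails loudly on the first test run with a named exception, rather than silently falling through a switch's default case.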

  • Best practice - logging events (general) and changes (database)

    - by b0x0rz
    Need help with logging all activities on a site as well as database changes. Requirements:

    - should be in the database
    - should be easily searchable by initiator (user name / session id), event (activity type) and event parameters

    I can think of a database design, but either it involves a lot of tables (one per event) so I can log each of the parameters of an event in a separate field, OR it involves one table with generic fields (7 int/numeric and 7 text types) and logs everything in one table, with an event type field determining what parameter got written where (and hoping that I don't need more than 7 fields of a certain type, or 8 or 9 or whatever number I choose)... Example of entries (the usual things):

    - [username] login failed @datetime
    - [username] login successful @datetime
    - [username] changed password, estimated security of password [low/ok/high/perfect] @datetime
    - [username] clicked result [result number] [result id] after searching for [search string] and got [number of results] @datetime
    - [username] changed profile name from [old name] to [new name] @datetime
    - [username] verified name with [credit card type] credit card @datetime
    - database table [table name] purged of old entries via automated process @datetime

    etc... So, has anyone dealt with this before? Any best practices / links you can share? I've seen it done with the generic solution mentioned above, but somehow that goes against what I learned from database design. As you can see, the sheer number of events that need to be trackable (each user will be able to see this info) is giving me headaches, BUT I do LOVE the one-event-per-table solution more than the generic one. Any thoughts? Edit: also, is there maybe an authoritative list of such (likely) events somewhere? Thnx. Stack Overflow says: "the question you're asking appears subjective and is likely to be closed." My answer: it probably is subjective, but it is directly related to an issue I have with designing my database / writing my code, so I'd welcome any help. Also, I tried narrowing down the ideas to two, so hopefully one of these will prevail, unless there already is an established solution for these kinds of things.
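
    For the generic-table option, one way to keep it honest is to fix each event's slot layout in exactly one place. A sketch (C#, hypothetical names; the table behind it would mirror EventRow's columns):

        using System;

        class EventRow
        {
            public string Initiator;   // user name / session id
            public string EventType;   // discriminator deciding what the slots mean
            public DateTime At;
            public int?[] Ints = new int?[7];     // generic numeric parameter slots
            public string[] Texts = new string[7]; // generic text parameter slots
        }

        static class EventLog
        {
            public static EventRow LoginFailed(string user)
            {
                return new EventRow {
                    Initiator = user, EventType = "login_failed", At = DateTime.UtcNow
                };
            }

            public static EventRow SearchResultClicked(string user, string query,
                int resultNumber, int resultId, int totalResults)
            {
                var e = new EventRow {
                    Initiator = user, EventType = "search_result_clicked", At = DateTime.UtcNow
                };
                e.Texts[0] = query;
                e.Ints[0] = resultNumber; // slot meanings are fixed here, per event type,
                e.Ints[1] = resultId;     // so queries on the generic table stay decodable
                e.Ints[2] = totalResults;
                return e;
            }
        }

    The typed factory methods give you most of the readability of one-table-per-event while keeping the single-table storage and its uniform searchability by initiator, event type, and time.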
