Search Results

Search found 9124 results on 365 pages for 'big sal'.


  • Running code/script as a result of a form submission in ASP.NET

    - by firmbeliever
    An outside vendor did some HTML work for us, and I'm filling in the actual functionality. I have an issue that I need help with.

    He created a simple HTML page that is opened as a modal pop-up. It contains a form with a few input fields and a submit button. On submitting, an email should be sent using info from the input fields. I turned his simple HTML page into a simple .aspx page, added runat="server" to the form, and added the C# code inside script tags to create and send the email.

    It technically works, but it has a big issue. After the information is submitted and the email is sent, the page (which is supposed to be just a modal pop-up) gets reloaded, but it is no longer a pop-up: it's reloaded as a standalone page. So I'm trying to find out if there is a way to get the form to execute those few lines of C# code on submission without reloading the form.

    I'm somewhat aware of CGI scripts, but from what I've read, those can be buggy with IIS. Plus, I'd like to think I could get these few lines of code to run without creating a separate executable. Any help is greatly appreciated.
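
    For reference, a minimal sketch of the setup described (reconstructed as an illustration; the control IDs, addresses, and SMTP host are hypothetical):

        <%@ Page Language="C#" %>
        <script runat="server">
            // Runs on postback: builds the email from the form fields and sends it.
            protected void Submit_Click(object sender, EventArgs e)
            {
                var message = new System.Net.Mail.MailMessage(
                    "noreply@example.com", "sales@example.com",
                    "Contact form", NameBox.Text + "\n" + CommentsBox.Text);
                new System.Net.Mail.SmtpClient("localhost").Send(message);
            }
        </script>
        <form id="contactForm" runat="server">
            <asp:TextBox ID="NameBox" runat="server" />
            <asp:TextBox ID="CommentsBox" runat="server" />
            <asp:Button ID="SubmitButton" runat="server" Text="Send" OnClick="Submit_Click" />
        </form>

    The full-page postback this triggers is exactly what destroys the modal; avoiding the reload means submitting the form asynchronously (for example with XMLHttpRequest) rather than through a regular submit.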


  • What are CAD apps written in, and how are they organized?

    - by ldigas
    What are the CAD applications of today (Rhino, AutoCAD) written in, and how are they organized internally? I gave AutoCAD and Rhino as examples, although I would love to hear of other examples as well. I'm particularly interested in knowing what their backends are written in (multiple languages?) and how they are organized, and how they handle their frontend (GUI) in real time. Do they use native Windows APIs or some libraries of their own? Since I imagine that, as good as they may be, the open-source solutions on today's market won't cut it. I may be wrong...

    As most of you who have used them know, they handle, among other things, relatively complex rotational operations in real time (shading is not what interests me). I've been doing some experiments with several packages recently, and for some larger models I found that there is considerable difference in speed in, for example, programmed rotation (big, full ship models) among some of them (which I won't name). So I'm wondering about their internals... Also, if someone knows of a book on the subject, I'd be interested to hear of it.


  • What database strategy to choose for a large web application

    - by Snoopy
    I have to rewrite a large database application, running on 32 servers. The hardware is up to date; each machine has two quad-core Xeons and 32 GB of RAM.

    The database is multi-tenant; each customer has his own file, around 5 to 10 GB each. I run around 50 databases on this hardware. The app is open to the web, so I have no control over the load. There are no really complex queries, so SQL is not required if there is a better solution. The databases get updated via FTP every day at midnight, and the database is read-only. C# is my favourite language and I want to use ASP.NET MVC.

    I thought about the following options:

    • Use two big SQL Server 2012 machines to serve the 32 servers with data, with the 32 servers running IIS and providing REST services.
    • Denormalize the database and use Redis on each web server, with BookSleeve as the Redis client.
    • Use a combination of SQL Server and Redis.
    • Use SQL Server 2012 together with Hadoop.
    • Use Hadoop without SQL Server.

    For a read-only database, what is the best way to get the best performance without losing maintainability? Does map-reduce make sense at all in such a scenario?

    The reason for the rewrite is that the old app, written in C++ with ISAM technology, is too slow, and its interfaces are old-fashioned and not nice to use from a website, especially when using Ajax. The app uses a relational data model with many tables, but it is possible to write one accelerator table against which all queries can be performed; all other information in the other tables is then reachable by a simple key lookup.


  • How I May Have Taken A Wrong Path in Programming

    - by Ygam
    I am seriously stumped right now. I am a BSIT graduate, but I only started actual programming less than a year ago. I have observed that I have the following attitudes in programming:

    • I tend to be more of a purist, scorning inelegant approaches to solving problems with code.
    • I tend to look at everything on a large scale, planning it all before I start coding, either in simple flowcharts or complex UML charts.
    • I have a really strong impulse to refactor my code, even if I miss deadlines or prolong development times.
    • I am obsessed with good directory structures, file naming conventions, and class, method, and variable naming conventions.
    • I always want to study something new, even, as I said, at the cost of missing deadlines.
    • I tend to see software development as something to engineer, to architect; that is, seeing how things relate to each other and how blocks of code can interact (I am a huge fan of loose coupling), i.e. the OOP way of thinking.
    • I combine OOP and procedural coding whenever I see fit.
    • I want my code to execute fast (thus the elegant approaches and the refactoring).

    This bothers me because I see my colleagues doing much better the other way around (aside from the fact that they have been programming since our first year in college). By "the other way around" I mean: they fire up their editors and get the job done much faster, because they don't stop to consider how clean their code is or how elegant their algorithms are; they don't bother with OOP however big their projects are; they mostly use web APIs, piece them together, and voila! Working code! Clients are happy, and they get paid fast, at the expense of really unmaintainable or hard-to-read code that lacks structure and conventions, or slow execution of certain actions (the common counter-argument being that internet connections are much faster these days and hardware is more powerful). The excuse I often receive is that clients don't care about how you write the code, only about how soon you deliver it. If it works, all is good.

    Now, might my "purist" approach to programming have been the wrong way to start? Should I just dump these purist concepts and code away, because I have seen it: clients don't really care how beautifully coded it is?


  • What's the best choice career-wise: to know a little about a lot, or a lot about a little?

    - by nimo
    I work as a developer at a rather small company, and we provide a web application that is used by a big base of customers. Because we are so small, everyone has to be able to do a lot of different tasks. These range from advanced support, to developing the product (programming: C/C++, C#, PHP, SQL, JavaScript, HTML, CSS), to handling network configuration and network-related issues, to sometimes even going along to sales meetings with potential customers.

    My concern is that I don't really specialize in any specific area; I know and learn a little about a lot. I graduated from school two years ago and this is my first real employment, and when I look at other positions out there they always require so-and-so many years of experience in a specific area (for example, 5 years of C#). Getting that kind of specialized experience will be really hard at my current job.

    My question for you: in your opinion, what is the best choice career-wise, to know a little about a lot or a lot about a little? What path did you take? What are the pros and cons that come with that choice?


  • Does F# documentation have a way to search for functions by their types?

    - by Nathan Sanders
    Say I want to know whether F# has a library function of type

        ('T -> bool) -> 'T list -> int

    i.e., something that counts how many items of a list a function returns true for (or returns the index of the first item that returns true).

    I used to use the big list at the MSR site for F# before the documentation on MSDN was ready. I could just search the page for the text above, because the types were listed. But now the MSDN documentation only lists types on the individual pages; the module page is a mush of descriptive text. Google kinda-sorta works, but it can't help with

        // compatible interfaces
        ('T -> bool) -> Seq<'T> -> int
        // argument swaps
        Seq<'T> -> ('T -> bool) -> int
        // type-variable names
        ('a -> bool) -> Seq<'a> -> int
        // wrappers
        ('a -> bool) -> 'a list -> option<int>
        // uncurried versions
        ('T -> bool) * 'T list -> int
        // .NET generic syntax
        ('T -> bool) -> List<'T> -> int
        // methods
        List<'T> member : ('T -> bool) -> int

    Haskell has a standalone program for this called Hoogle. Does F# have an equivalent, a "Fing" or something?
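
    (As an aside: whether or not a single library function with this exact signature exists, both behaviours are easy to compose; a sketch, with hypothetical names:)

        // count the items for which pred returns true
        let countWhere pred xs = xs |> List.filter pred |> List.length
        // index of the first item for which pred returns true, if any
        let firstIndex pred xs = List.tryFindIndex pred xs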


  • Splitting a 25 MB .txt file into smaller files using a text delimiter

    - by user574141
    Regards, SO. I am new to Python and Perl. I have been trying to solve a simple problem and getting tied in knots with syntax. I hope someone has the time and patience to help.

    I have a 25 MB file in .txt format which contains news-wire articles going back to 1970. Each news story is concatenated to the next, with only the "Copyright" statement to delimit. Each news story starts with "Item XX of XXX DOCUMENTS". There are certain metadata that are repeated throughout; I will use these for tagging later on.

    I wish to split this 25 MB file into separate .txt files, each containing one news story (i.e. the text between "DOCUMENTS" and "Copyright"), saving each with a different name (obviously). I am trying to:

    1) open the file...
    2) iterate over the lines in the file, checking for the delimiter, and if it is not present, writing the line to a list
    3) write that list to a separate small file.

    I'm having big problems with changing filenames using a counter, and with making Python carry on from where I left off (is the "seek" function appropriate?). The approach I have been trying, straightened out so that it at least runs (the output filename pattern is just one possible choice):

        import re

        story_count = 0
        output = None
        with open("myfile.txt", "r") as myfile:
            for line in myfile:
                if re.search(r"\bDOCUMENTS\b", line):   # a new story begins here
                    if output:
                        output.close()
                    story_count += 1
                    # vary the filename with the counter
                    output = open(r"C:\Users\dunner7\Documents\story_%04d.txt" % story_count, "w")
                if output:
                    output.write(line)
        if output:
            output.close()

    Thank you for your time and patience. RD


  • Weird behavior of matching array keys after json_decode()

    - by arnorhs
    I've got some very weird behavior in my PHP code. I don't know if this is actually a good SO question, since it almost looks like a bug in PHP. I had this problem in a project of mine and isolated it:

        // JSON object that will be converted into an array
        $json = '{"5":"88"}';
        $jsonvar = (array) json_decode($json); // notice: casting to an array

        // Displaying the array:
        var_dump($jsonvar);

        // Testing if the key is there
        var_dump(isset($jsonvar["5"]));
        var_dump(isset($jsonvar[5]));

    That code outputs the following:

        array(1) { ["5"]=> string(2) "88" }
        bool(false)
        bool(false)

    The big problem: both of those tests should produce bool(true). If you create the same array using regular PHP arrays, this is what you'll see:

        // Let's create a similar PHP array in the regular manner:
        $phparr = array("5" => "88");

        // Displaying the array:
        var_dump($phparr);

        // Testing if the key is there
        var_dump(isset($phparr["5"]));
        var_dump(isset($phparr[5]));

    The output of that:

        array(1) { [5]=> string(2) "88" }
        bool(true)
        bool(true)

    So this doesn't really make sense. I've tested this on two different installations of PHP/Apache. You can copy-paste the code into a PHP file yourself to test it. It must have something to do with the casting from an object to an array.
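
    (A hedged aside: the cast is indeed the culprit; object-to-array casts preserve numeric-string keys in a form that PHP array lookups cannot address. Asking json_decode for an array directly avoids the cast:)

        // true makes json_decode build a PHP array directly,
        // so "5" becomes a proper integer key
        $jsonvar = json_decode($json, true);
        var_dump(isset($jsonvar[5]));   // bool(true)
        var_dump(isset($jsonvar["5"])); // bool(true)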


  • Design an Application That Stores and Processes Files

    - by phasetwenty
    I'm tasked with writing an application that acts as a central storage point for files (usually document formats) as provided by other applications. It also needs to take commands like "file 395 needs a copy in X format", at which point some work is offloaded to a 3rd party application.

    I'm having trouble coming up with a strategy for this. I'd like to keep the design as simple as possible, so I'd like to avoid big extra frameworks or techniques like threads for as long as it makes sense. The clients are expected to be web applications (for example, one is a Django application that receives files from our customers; the others are not yet implemented). The platform it will be running on is likely going to be Python on Linux, unless I have a strong argument to use something else.

    In the beginning I thought I could fit the information I wanted to communicate in the filenames, and let my application parse the filename to figure out what it needed to do, but this is proving too inflexible with the amount of information I'm realizing I need to make available.

    Another idea is to pair FTP with a database used as a communication medium (client uploads a file and updates the database with a command as a row in a table), but I don't like this idea because adding commands (a known change) looks like it will require adding code as well as changing database schemas. It will also muddy up the interface my clients will have to use.

    I looked into Pyro to let applications communicate more directly, but I don't like the idea of running an extra nameserver for this one purpose. I also don't see a good way to do file transfer within this framework.

    What I'm looking for is techniques and/or technologies applicable to my problem. At the simplest level, I need the ability to accept files and messages with them.


  • How to Alphabetize a CSS file in Vim

    - by Kev
    I get a CSS file:

        div#header h1 {
            z-index: 101;
            color: #000;
            position: relative;
            line-height: 24px;
            margin-right: 48px;
            border-bottom: 1px solid #dedede;
            font-size: 18px;
        }
        div#header h2 {
            z-index: 101;
            color: #000;
            position: relative;
            line-height: 24px;
            margin-right: 48px;
            border-bottom: 1px solid #dedede;
            font-size: 18px;
        }

    I want to alphabetize the lines between the {...}:

        div#header h1 {
            border-bottom: 1px solid #dedede;
            color: #000;
            font-size: 18px;
            line-height: 24px;
            margin-right: 48px;
            position: relative;
            z-index: 101;
        }
        div#header h2 {
            border-bottom: 1px solid #dedede;
            color: #000;
            font-size: 18px;
            line-height: 24px;
            margin-right: 48px;
            position: relative;
            z-index: 101;
        }

    I map F7 to do it:

        nmap <F7> /{/+1<CR>vi{:sort<CR>

    But I need to press F7 over and over again to get the work done. If the CSS file is big, it's time-consuming and I easily get bored. I want to get the commands piped so that I only press F7 once! Any ideas? Thanks!
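
    (One possible single-pass answer, hedged but built on the standard :global idiom: run the sort once for every block in the file.)

        " for every line containing {, sort from the following line
        " down to the line just before the closing }
        nmap <F7> :g/{/ .+1,/}/-1 sort<CR>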


  • Export large amount of data from Oracle 10G to SQL Server 2005

    - by uniball
    Dear all, I need to export 100 million data rows (average row length ~100 bytes) from an Oracle 10g database table into SQL Server (over a WAN/VLAN with 6 Mbit/s capacity) on a regular basis. So far, these are the options that I have tried, with a quick summary of each. Has anyone tried this before? Are there other, better options? Which option would be the best in terms of performance and reliability? The times taken were calculated using tests on smaller amounts of data and then extrapolated to estimate the time required.

    1. Using the data import wizard on the SQL Server, or SSIS packages, to import the data. It will take around 150 hours to complete the task.
    2. Using an Oracle batch job to spool the data into a comma-delimited flat file, then using an SSIS package to FTP this file to the SQL Server and load directly from the flat file. The issue here is the size of the flat file, which is expected to run into GBs.
    3. Although this option is drastically different, I am even considering using a linked server to query the Oracle data directly at run time, to avoid bringing the data over at all. Performance is a big problem, and I have limited control over the Oracle database in terms of creating table indexes.

    Regards, Uniball


  • Building a static (but complicated) lookup table using templates

    - by MarkD
    I am currently in the process of optimizing a numerical analysis code. Within the code, there is a 200x150-element lookup table (currently a static std::vector<std::vector<double>>) that is constructed at the beginning of every run. The construction of the lookup table is actually quite complex: the values in it are computed using an iterative secant method on a complicated set of equations.

    Currently, for a simulation, the construction of the lookup table is 20% of the run time (run times are on the order of 25 seconds; lookup table construction takes 5 seconds). While 5 seconds might not seem like a lot, when running our MC simulations, where we run 50k+ simulations, it suddenly becomes a big chunk of time.

    Along with some other ideas, one thing that has been floated: can we construct this lookup table using templates, at compile time? The table itself never changes. Hard-coding a large array isn't a maintainable solution (the equations that go into generating the table are constantly being tweaked), but it seems that if the table could be generated at compile time, it would give us the best of both worlds (easily maintainable, no overhead during runtime).

    So, I propose the following (much simplified) scenario. Let's say you wanted to generate a static array (use whatever container suits you best: a 2D C array, a vector of vectors, etc.) at compile time. You have a function defined:

        double f(int row, int col);

    where the return value is the entry in the table, row is the lookup table row, and col is the lookup table column. Is it possible to generate this static array at compile time using templates, and how?
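
    (A hedged sketch of one answer: on a modern compiler this is more natural with constexpr than with template metaprogramming. f below is a trivial stand-in; the real secant-method solver would itself have to be written as a constexpr function. Assuming C++17:)

        #include <array>

        // stand-in for the real computation; must be constexpr-evaluable
        constexpr double f(int row, int col) { return row * 0.5 + col * 0.25; }

        template <std::size_t Rows, std::size_t Cols>
        constexpr auto make_table() {
            std::array<std::array<double, Cols>, Rows> t{};
            for (std::size_t r = 0; r < Rows; ++r)
                for (std::size_t c = 0; c < Cols; ++c)
                    t[r][c] = f(static_cast<int>(r), static_cast<int>(c));
            return t;
        }

        // built entirely at compile time; no work at program startup
        constexpr auto table = make_table<200, 150>();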


  • Are there good reasons not to use an ORM?

    - by hangy
    During my apprenticeship, I have used NHibernate for some smaller projects which I mostly coded and designed on my own. Now, before starting a bigger project, the discussion arose of how to design data access and whether or not to use an ORM layer. As I am still in my apprenticeship and still consider myself a beginner in enterprise programming, I did not really try to push my opinion, which is that using an object-relational mapper can ease development quite a lot. The other coders on the development team are much more experienced than me, so I think I will just do what they say. :-)

    However, I do not completely understand two of the main reasons for not using NHibernate or a similar project:

    • One can just build one's own data access objects with SQL queries and copy those queries out of Microsoft SQL Server Management Studio.
    • Debugging an ORM can be hard.

    So, of course I could just build my data access layer with a lot of SELECTs etc., but here I would miss the advantage of automatic joins, lazy-loading proxy classes, and a lower maintenance effort if a table gets a new column or a column gets renamed. (Updating numerous SELECT, INSERT and UPDATE queries vs. updating the mapping config and possibly refactoring the business classes and DTOs.)

    Also, using NHibernate you can run into unforeseen problems if you do not know the framework very well. That could be, for example, trusting the Table.hbm.xml where you set a string's length to be automatically validated. However, I can also imagine similar bugs in a "simple" SqlConnection-query-based data access layer.

    Finally, are the arguments mentioned above really a good reason not to utilise an ORM for a non-trivial, database-based enterprise application? Are there other arguments they/I might have missed?

    (I should probably add that I think this is the first "big" .NET/C# based application which will require teamwork. Good practices which are seen as pretty normal on Stack Overflow, such as unit testing or continuous integration, are non-existent here up to now.)


  • Using Memcached in Python/Django - questions.

    - by Thomas
    I am starting to use memcached to make my website faster. For constant data in my database I use this:

        from django.core.cache import cache

        cache_key = 'regions'
        regions = cache.get(cache_key)
        if regions is None:
            # not found in cache
            regions = Regions.objects.all()
            cache.set(cache_key, regions, 2592000)  # 2,592,000 seconds = 30 days
        return regions

    For seldom-changed data I use signals:

        from django.core.cache import cache
        from django.db.models import signals

        def nuke_social_network_cache(sender, instance, **kwargs):
            cache_key = 'networks_for_%s' % (instance.user_id,)
            cache.delete(cache_key)

        signals.post_save.connect(nuke_social_network_cache, sender=SocialNetworkProfile)
        signals.post_delete.connect(nuke_social_network_cache, sender=SocialNetworkProfile)

    Is this the correct way?

    I installed django-memcached 0.1.2, which shows me:

        Memcached Server Stats

        Server     Keys  Hits  Gets  Hit_Rate  Traffic_In  Traffic_Out  Usage    Uptime
        127.0.0.1  15    220   276   79%       83.1 KB     364.1 KB     18.4 KB  22:21:25

    Can somebody explain what the columns mean?

    And a last question: I have templates where I fetch many records from a few tables (relationships). In my view I get records from one table, and in the templates I show them along with related info from the other tables. Generating the page takes a few seconds even for very small tables (<100 records). Is there an easy way to cache the queries issued from templates? Or do I have to build some big structure in my view (with all the related tables), cache that, and send it to the template?
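
    (One option for the last question, as a sketch; the fragment name and timeout here are made up. Django's template fragment caching can wrap the expensive part of the template, and if the related rows are currently fetched lazily one by one, select_related() on the queryset in the view collapses those lookups into a single join.)

        {% load cache %}
        {% cache 600 record_details record.id %}
            {# expensive relationship-traversing markup goes here #}
        {% endcache %}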


  • Is there a table of OpenGL extensions, versions, and hardware support somewhere?

    - by Thomas
    I'm looking for some resource that can help me decide what OpenGL version my game needs at minimum, and what features to support through extensions. Ideally, a table of the following format:

                        1.0  1.1  1.2  1.2.1  1.3  ...
        multitexture    -    ARB  ARB  core   core
        texture_float   -    EXT  EXT  ARB    ARB
        ...

    (Not sure about the values I put in, but you get the idea.)

    The extension specs themselves, at opengl.org, list the minimum OpenGL version they need, so that part is easy. However, many extensions have been accepted and became core standard in subsequent OpenGL versions, but it is very hard to find when that happened. The only way I could find is to compare the full OpenGL standards document for each version.

    On a related note, I would also very much like to know which extensions/features are supported by which hardware, to help me decide what features I can safely use in my game, and which ones I need to make optional. For example, a big honkin' table like this:

                      MAX_TEXTURE_IMAGE_UNITS  MAX_VERTEX_TEXTURE_IMAGE_UNITS  ...
        GeForce 6xxx  8                        4
        GeForce 7xxx  16                       8
        ATi x300      8                        4
        ...

    (Again, I'm making the values up.)

    The table could list hardware limitations from glGet but also support for particular extensions, and limitations of such extension support (e.g. what floating-point texture formats are supported in hardware). Any pointers to these or similar resources would be hugely appreciated!


  • Cross-Origin Resource Sharing (CORS) - am I missing something here?

    - by David Semeria
    I was reading about CORS (https://developer.mozilla.org/en/HTTP_access_control) and I think the implementation is both simple and effective. However, unless I'm missing something, I think there's a big part missing from the spec.

    As I understand it, it's the foreign site that decides, based on the origin of the request (and optionally including credentials), whether to allow access to its resources. This is fine. But what if malicious code on the page wants to POST a user's sensitive information to a foreign site? The foreign site is obviously going to authenticate the request. Hence, again unless I'm missing something, CORS actually makes it easier to steal sensitive information.

    I think it would have made much more sense if the original site could also supply an immutable list of servers its page is allowed to access. So the expanded sequence would be:

    1) Supply a page with a list of acceptable CORS servers (abc.com, xyz.com, etc.).
    2) The page wants to make an XHR request to abc.com: the browser allows this because it's in the allowed list, and authentication proceeds as normal.
    3) The page wants to make an XHR request to malicious.com: the request is rejected locally (i.e. by the browser) because the server is not in the list.

    I know that malicious code could still use JSONP to do its dirty work, but I would have thought that a complete implementation of CORS would imply the closing of the script-tag multi-site loophole. I also checked the official CORS spec (http://www.w3.org/TR/cors) and could not find any mention of this issue.
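
    (For what it's worth, the wished-for mechanism exists outside CORS: a Content-Security-Policy header lets the originating site whitelist XHR destinations, much like step 1 above. A sketch, reusing the poster's example domains:)

        Content-Security-Policy: connect-src 'self' https://abc.com https://xyz.com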


  • Should developers *really* have private offices?

    - by Aron Rotteveel
    We will probably be moving within a year, so we have to make some decisions regarding office layout. At the moment, our company is basically one big office. When our developers really don't want to be disturbed at all, we all have our own headphones to mute the outside world. Still, it seems a lot of people feel that private offices are no doubt the way to go. From Joel's article Private Offices Redux:

        Not every programmer in the world wants to work in a private office. In fact quite a few would
        tell you unequivocally that they prefer the camaraderie and easy information sharing of an open
        space. Don't fall for it. They also want M&Ms for breakfast and a pony. Open space is fun but
        not productive.

    Even though I can understand the benefit to productivity, does having a private office really result in more net productivity? There seem to be plenty of companies that create wide-open spaces and still maintain good productivity. Or so it seems. (I should mention that many of them use cubicles, though.)

    What is your opinion on this? What does your company do? Is there some middle ground here?

    Some more related information on this matter:

    • Private Offices Redux
    • The new Fog Creek office
    • A Field Guide to Developers
    • Gmail recruitment page

    I found this last one somewhat remarkable, since the Gmail recruitment page promotes the "wide open space" idea.


  • What happens when we say "listen to a port"?

    - by smwikipedia
    Hi, when we start a server application, we always need to specify the port number it listens to. But how is this "listening" mechanism implemented under the hood? My current mental model is this:

    • The operating system associates the port number with some buffer. The server application's responsibility is to monitor this buffer: if there's no data in it, the server application's listen operation just blocks the application.
    • When some data arrives from the wire, the operating system checks the data, sees whether it is targeted at this port number, and fills the buffer.
    • The OS then notifies the blocked server application, and the server application gets the data and continues to run.

    My questions: if the above scenario is correct, how does the operating system know that data is arriving from the wire? It cannot be busy polling, so is it some kind of interrupt-based mechanism? If too much data arrives and the buffer is not big enough, will there be data loss? And is the "listen to a port" operation really a blocking operation? Many thanks.
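
    (For concreteness, a minimal sketch of the calls involved, here in Python; the underlying bind/listen/accept system calls are the same in any language:)

        import socket

        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.bind(("0.0.0.0", 8080))   # associate the socket with a port
        s.listen(5)                 # ask the kernel to queue incoming connections
        conn, addr = s.accept()     # blocks until the kernel hands a connection over
        data = conn.recv(4096)      # blocks until data arrives in the socket buffer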


  • Bash PATH: How long is too long?

    - by ajwood
    Hi, I'm currently designing a software quarantine pattern to use on Ubuntu. I'm not sure how standard "quarantine" is in this context, so here is what I hope to accomplish.

    Inside a particular quarantine is all of the stuff one needs to run an application (bin, share, lib, etc.). Ideally, the quarantine has no leaks, which means it's not relying on any code outside of itself on the system. A quarantine can be defined as a set of executables (and some environment settings needed to make them run). I think it will be beneficial to separate the built packages enough that upgrading to a newer version of the quarantine won't require rebuilding the whole thing: I'll be able to update just a few packages, and then the new quarantine can use some of the old parts and some of the new parts.

    One issue I'm wondering about is the environment variables I'll be setting up to use a particular quarantine. Is there a hard limit on how big PATH can be (either in number of characters, or in the number of directories it contains)? Might a PATH be so long that it affects performance?

    Thanks very much, Andrew

    P.S. Any other wisdom that might help my design would be greatly appreciated. :)
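
    (A hedged pointer: the practical ceiling on Linux is the kernel's per-string limit on environment variables at exec time (MAX_ARG_STRLEN, typically 128 KiB), and lookup cost grows with the number of directories long before that. Two quick checks:)

        echo ${#PATH}                        # current PATH length in characters
        echo "$PATH" | tr ':' '\n' | wc -l   # number of directories on it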


  • Deploy to web container, bundle web container or embed web container...

    - by Jason
    I am developing an application that needs to be as simple as possible to install for the end user. While the end users will likely be experienced Linux users (or sales engineers), they don't really know anything about Tomcat, Jetty, etc., nor do I think they should. So I see three ways to deploy our application. I should also state that this is the first app I have had to deploy that had a web interface, so I haven't really faced this question before.

    The first is to deploy the application into an existing web container. Since we only deploy to SUSE or Red Hat, this seems easy enough to do. However, we're not big on the idea of multiple apps running in one web container; it makes it harder to take down just one app.

    The next option is to just bundle Tomcat or Jetty and have the startup/shutdown scripts launch our bundled web container. Or, third, embed: this will probably provide the same user experience as the second option.

    I'm curious what others do when faced with this problem to make it as foolproof as possible for the end user. I've almost ruled out deploying into an existing web container, as we often like to set per-application resource limits and CPU affinity, which I believe would affect all apps deployed into a shared web container/app server, not just a specific application. Thank you.
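
    (For the third option, embedding is only a few lines with Jetty's server API; a sketch, assuming a servlet-based app packaged as a war. The port and war name are placeholders:)

        import org.eclipse.jetty.server.Server;
        import org.eclipse.jetty.webapp.WebAppContext;

        public class Main {
            public static void main(String[] args) throws Exception {
                Server server = new Server(8080);      // the app owns its own port
                WebAppContext context = new WebAppContext();
                context.setWar("app.war");             // hypothetical bundled war file
                server.setHandler(context);
                server.start();                        // no external container required
                server.join();
            }
        }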


  • JavaScript: storing properties and functions in variables

    - by richard
    Hello, I'm having trouble with my programming style and I hope to get some feedback here. I recently bought JavaScript: The Good Parts, and while I find it a big help, I'm still having trouble designing this application, especially when it comes to writing functions and methods.

    Example: I have a function that lets the user switch games in my app. This function updates game-specific information in the current view.

        var games = {
            active: Titanium.App.Properties.getString('active_game'),
            gameswitcher_positions: {
                'Game 1': 0,
                'Game 2': 1,
                'Game 3': 2,
                'Game 4': 3,
                'Game 5': 4
            },
            change: function (game) {
                if (this.active !== game) {
                    gameswitcher.children[this.gameswitcher_positions[this.active]].backgroundImage =
                        gameswitcher.children[this.gameswitcher_positions[this.active]].backgroundImage.replace('_selected', '');
                    gameswitcher.children[this.gameswitcher_positions[game]].backgroundImage =
                        gameswitcher.children[this.gameswitcher_positions[game]].backgroundImage.replace('.png', '_selected.png');
                    events.update(game);
                    this.active = game;
                }
            },
            init: function () {
                gameswitcher.children[this.gameswitcher_positions[this.active]].backgroundImage =
                    gameswitcher.children[this.gameswitcher_positions[this.active]].backgroundImage.replace('.png', '_selected.png');
                events.update(this.active);
            }
        };

    gameswitcher is a container view which contains the buttons to switch games. I am not satisfied with this approach, but I cannot think of a better one. Should I place gameswitcher_positions in a separate variable outside of the object instead of as a property? And what about the active game? Please give me feedback: what am I doing wrong?


  • My project is no longer used - how should I feel?

    - by flybywire
    For the last two years I have been developing and supporting an important project for a big customer. The project included mining data from the customer's existing systems, processing it, and displaying and updating it on the customer's public home page. The project was defined as crucial by the customer, and I was paid good money and flown at the customer's expense to meet key employees.

    Some months ago, when the project was finished and in maintenance mode, I informed the customer that I was no longer interested in doing it, as I had a new opportunity that would not be compatible with my existing customer. I was paid to train one of their employees, flown to meet him, and made sure everything worked and that he could safely be left in charge of the project. We finished on good terms, after I complied with all my obligations and they paid me all they owed me.

    Some days ago, just out of curiosity, I went to their website to see how the data continues to be updated, and much to my dismay I discovered that the day after my contract finished, my system was "turned off" and ceased to feed data to the public website.

    Let's put it clearly: there is no issue of money or a broken contract here. They are fully within their rights to do whatever they want with my software. But it is an issue of a bruised "programmer's ego". Should I feel bad about it (I do)? Should I care, and check with my customer whether they need some help? Or is it none of my business?


  • Optimally place a pie slice in a rectangle

    - by Lisa
    Given a rectangle (w, h) and a pie slice with a start angle and an end angle, how can I place the slice optimally in the rectangle so that it fills the space best (from an optical point of view, not mathematically speaking)?

    I'm currently placing the pie slice's center in the center of the rectangle and using half of the smaller of the two rectangle sides as the radius. This leaves plenty of empty room for certain configurations.

    Examples to make clear what I'm after, based on the precondition that the slice is drawn like a unit circle:

    • A start angle of 0 and an end angle of PI would lead to a filled lower half of the rectangle and an empty upper half. A good solution here would be to move the center up by 1/4*h.
    • A start angle of 0 and an end angle of PI/2 would lead to a filled bottom-right quarter of the rectangle. A good solution here would be to move the center point to the top left of the rectangle and to set the radius to the smaller of the two rectangle sides.

    This is fairly easy for the cases I've sketched, but it becomes complicated when the start and end angles are arbitrary. I am searching for an algorithm which determines the center of the slice and the radius in a way that fills the rectangle best. Pseudo code would be great, since I'm not a big mathematician; see the sketch below this question.
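
    (One workable approach, sketched in Python with mathematical y-up coordinates; flip y for screen coordinates. The idea: take the bounding box of the unit slice, scale it to fit the rectangle, and center it. Both examples above fall out of this: the half slice gets its center shifted by h/4, and the quarter slice gets radius min(w, h) with its center in a corner.)

        import math

        def place_slice(w, h, a0, a1):
            # points that bound the unit slice: the center, both arc endpoints,
            # and every axis extreme (multiple of pi/2) inside [a0, a1]
            pts = [(0.0, 0.0), (math.cos(a0), math.sin(a0)), (math.cos(a1), math.sin(a1))]
            k = math.ceil(a0 / (math.pi / 2))
            while k * (math.pi / 2) <= a1:
                t = k * (math.pi / 2)
                pts.append((math.cos(t), math.sin(t)))
                k += 1
            xs = [p[0] for p in pts]
            ys = [p[1] for p in pts]
            bw, bh = max(xs) - min(xs), max(ys) - min(ys)
            r = min(w / bw, h / bh)  # largest radius whose bounding box still fits
            # translate so the scaled bounding box sits centered in the rectangle
            cx = w / 2 - r * (min(xs) + max(xs)) / 2
            cy = h / 2 - r * (min(ys) + max(ys)) / 2
            return cx, cy, r  # draw the slice centered at (cx, cy) with radius r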


  • Distributed Cache with Serialized File as DataStore in Oracle Coherence

    - by user226295
    Weird, but I am investigating Oracle Coherence as a substitute for a distributed cache. My primary problem is that we don't have a distributed cache as such in our app right now. That's my major concern, and that's what I want to implement. So, let's say I take a machine and start a new (third) reading process: it will be able to connect to the cache and listen to it, and will have a full set of the cache triplicated (as of now it's duplicated). That's a waste from a common-sense standpoint too. The size of the cache is 2 GB, and not going distributed is limiting us. That's what brings me to Coherence.

    But we don't have a database as a persistent store either; we have archival processes as our persistent store (90 days' worth of data). OK, now multiply that by somewhere around 2 GB * 90 (that's the bare minimum we want to keep).

    That was my preliminary/intermediate analysis of Coherence as a solution. And then a (supposedly) brilliant thought crossed my mind: why not have this as persistent storage with my distributed cache? Does Oracle Coherence support that? I would get rid of the archiving infrastructure too (I hate daemon archiving processes). For some strange reasons, I don't want to go to the DB to replace those flat files.

    What say you: can Coherence be my savior? Any other stable alternatives, too? (Coherence is imposed on me by the big guys, FYI.)


  • Using Spring, Hibernate and Scala, is there a better way to load test data than DbUnit?

    - by egervari
    Here are some things I really dislike about DbUnit:

    1) You cannot specify the exact ordering of the inserts, because DbUnit likes to group your inserts by table name and not by the order you define them in the XML file. This is a problem when you have records depending on records in other tables, so you have to disable foreign key constraints during your tests... which actually sucks, because those foreign key constraints will fire in production while your tests won't be aware of them!

    2) They seem hell-bent on forcing you to use an XML namespace to define your XML, and I honestly can't be bothered to do this. I like the data.xml without any namespace. It works. But they are so hell-bent on deprecating it.

    3) Creating different XML files on a per-test basis is hard, so it actually encourages creating data for your entire app. Unfortunately, this process gets bloated too, once the data grows in size and things get entangled. There has got to be a better way to split up your test data into chunks without having to copy/paste a lot of it across all of your tests.

    4) Keeping track of id references in a big XML file is just impossible. If you have 130 domain classes, it gets bewildering. This model simply does not scale.

    Is there something less bloated and better in the Spring/Hibernate space? DbUnit has worn out its welcome and I'm really looking for something better.

