Search Results

Search found 18092 results on 724 pages for 'matt long'.

  • How do I protect the trunk from hapless newbies?

    - by Michael Haren
    A coworker relayed the following problem; let's say it's fictional to protect the guilty. A team of 5-10 works on a project which is issue-driven. That is, the typical flow goes like this:

      - A chunk of work (bug, enhancement, etc.) is created as an issue in the issue tracker
      - The issue is assigned to a developer
      - The developer resolves the issue and commits their code changes to the trunk
      - At release time, the frozen and heavily tested trunk (or release branch, or whatever) is built in release mode and released

    The problem he's having is that a couple of newbies made several bad commits that weren't caught due to an unfortunate chain of events. This was followed by a bad release with a rollback or flurry of hot fixes.

    One idea we're toying with: revoke commit access to the trunk for newbies and make them develop on a per-developer branch (we're using SVN):

      - Good: newbies are isolated and can't hurt others
      - Good: committers merge newbie branches with the trunk frequently
      - Good: this enforces rigid code reviews
      - Bad: this is burdensome on the committers (but there's probably no way around it, since the code needs to be reviewed!)
      - Bad: it might make traceability of trunk changes a little tougher, since the reviewer would be doing the commit--not too sure on this.

    Update: Thank you, everyone, for your valuable input. I have concluded that this is far less a code/coder problem than I first presented. The root of the issue is that the release procedure failed to capture and test some poor-quality changes to the trunk. Plugging that hole is most important. Relying on the false assumption that code in the trunk is "good" is not the solution. Once that hole--testing--is plugged, mistakes by everyone--newbie or senior--will be caught properly and dealt with accordingly. Next, a greater emphasis on code reviews and mentorship (probably driven by some systematic changes to encourage it) will go a long way toward improving code quality. With those two fixes in place, I don't think something as rigid or draconian as what I proposed above is necessary. Thanks!

  • ByteFlow installation Error on Windows

    - by Patrick
    Hi folks, when I try to install ByteFlow on my Windows development machine, I get the following MySQL error and I don't know what to do. Please give me some suggestions. Thank you so much!

        E:\byteflow-5b6d964917b5>manage.py syncdb
        !!! Read about DEBUG in settings_local.py and then remove me !!!
        !!! Read about DEBUG in settings_local.py and then remove me !!!
        J:\Program Files\Python26\lib\site-packages\MySQLdb\converters.py:37: DeprecationWarning: the sets module is deprecated
          from sets import BaseSet, Set
        Creating table auth_permission
        Creating table auth_group
        Creating table auth_user
        Creating table auth_message
        Creating table django_content_type
        Creating table django_session
        Creating table django_site
        Creating table django_admin_log
        Creating table django_flatpage
        Creating table actionrecord
        Creating table blog_post
        Traceback (most recent call last):
          File "E:\byteflow-5b6d964917b5\manage.py", line 11, in <module>
            execute_manager(settings)
          File "J:\Program Files\Python26\lib\site-packages\django\core\management\__init__.py", line 362, in execute_manager
            utility.execute()
          File "J:\Program Files\Python26\lib\site-packages\django\core\management\__init__.py", line 303, in execute
            self.fetch_command(subcommand).run_from_argv(self.argv)
          File "J:\Program Files\Python26\lib\site-packages\django\core\management\base.py", line 195, in run_from_argv
            self.execute(*args, **options.__dict__)
          File "J:\Program Files\Python26\lib\site-packages\django\core\management\base.py", line 222, in execute
            output = self.handle(*args, **options)
          File "J:\Program Files\Python26\lib\site-packages\django\core\management\base.py", line 351, in handle
            return self.handle_noargs(**options)
          File "J:\Program Files\Python26\lib\site-packages\django\core\management\commands\syncdb.py", line 78, in handle_noargs
            cursor.execute(statement)
          File "J:\Program Files\Python26\lib\site-packages\django\db\backends\util.py", line 19, in execute
            return self.cursor.execute(sql, params)
          File "J:\Program Files\Python26\lib\site-packages\django\db\backends\mysql\base.py", line 84, in execute
            return self.cursor.execute(query, args)
          File "J:\Program Files\Python26\lib\site-packages\MySQLdb\cursors.py", line 166, in execute
            self.errorhandler(self, exc, value)
          File "J:\Program Files\Python26\lib\site-packages\MySQLdb\connections.py", line 35, in defaulterrorhandler
            raise errorclass, errorvalue
        _mysql_exceptions.OperationalError: (1071, 'Specified key was too long; max key length is 767 bytes')
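
    Error 1071 is InnoDB's 767-byte limit on index keys: under the 3-byte-per-character utf8 charset, an indexed VARCHAR longer than 255 characters needs more than 767 bytes. A hedged sketch of one common workaround, creating the schema with a 1-byte charset before running syncdb (the database name and credentials are illustrative, and shortening the offending indexed column would work just as well without giving up utf8):

        # Hedged sketch: recreate the schema with a 1-byte charset so that
        # indexed VARCHAR columns stay under InnoDB's 767-byte key limit.
        # Database name and credentials are illustrative.
        import MySQLdb

        conn = MySQLdb.connect(host="localhost", user="root", passwd="secret")
        cur = conn.cursor()
        # utf8 costs 3 bytes/char: an index on VARCHAR(256) would need 768 bytes.
        # latin1 costs 1 byte/char, so the same column indexes comfortably.
        cur.execute("CREATE DATABASE byteflow CHARACTER SET latin1")
        conn.close()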

  • PHP/MySQL - Working with two databases, one shared and one local to an instance of the application

    - by Extrakun
    The situation: using an off-the-shelf PHP application, I have to add in a new module for extra functionality. Today, it was made known that eventually four different instances of the application are to be deployed, but the data from the new functionality is to be shared among those 4 instances. Each instance should still have its own database for users, content, etc. So:

      - The data for the new functionality goes into a 'shared' database.
      - The data for the application (user login, content, uploads) goes into a 'local' database.

    To make things more complex, the new module I am writing will fetch data from the local DB and the shared DB at the same time. A rewrite of the base application would take too long, and I only have control over the new module which I am writing.

    The ideal solution: is there a way to encapsulate 2 databases under one name in MySQL? I do not wish to switch DB connections or specifically name the DB to query from inside my SQL statements. The application uses a DB wrapper, so I am able to change it somehow so I can invisibly attempt to read/write to two different DBs. What is the best way to handle this problem?
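
    If the 'shared' and 'local' schemas live on the same MySQL server, one connection can reach both by qualifying table names with the schema name, so a DB wrapper can route queries without switching connections. A minimal sketch of the idea (shown in Python for brevity; the schema and table names are invented for illustration):

        # Minimal sketch: one MySQL connection addressing two schemas by
        # qualified name. Schema/table names and credentials are illustrative.
        import MySQLdb

        conn = MySQLdb.connect(host="localhost", user="app", passwd="secret")
        cur = conn.cursor()
        # No default database is selected; every table is schema-qualified,
        # so 'local' and 'shared' data can even be joined in one statement.
        cur.execute("""
            SELECT u.username, f.some_value
            FROM local_db.users AS u
            JOIN shared_db.feature_data AS f ON f.user_id = u.id
        """)
        for row in cur.fetchall():
            print(row)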

  • How do I store a string longer than 4000 characters in an Oracle Database using Java/JDBC?

    - by Ventrue
    I'm not sure how to use Java/JDBC to insert a very long string into an Oracle database. I have a String which is greater than 4000 characters; let's say it's 6000. I want to take this string and store it in an Oracle database. The way to do this seems to be with the CLOB datatype. Okay, so I declared the column as description CLOB. Now, when it comes time to actually insert the data, I have a prepared statement pstmt. It looks like pstmt = conn.prepareStatement("INSERT INTO Table VALUES(?)"). So I want to use the method pstmt.setClob(). However, I don't know how to create a Clob object with my String in it; there's no constructor (presumably because it can potentially be much larger than available memory). How do I put my String into a Clob?

    Keep in mind I'm not a very experienced programmer; please try to keep the explanations as simple as possible. Efficiency, good practices, etc. are not a concern here, I just want the absolute easiest solution. I'd like to avoid downloading other packages if at all possible; right now I'm just using JDK 1.4 and what is labelled "ojdbc14.jar". I've looked around a bit, but I haven't been able to follow any of the explanations I've found. If you have a solution that doesn't use Clobs, I'd be open to that as well, but it has to be one column.

  • Printing to different printers using Mozilla

    - by Nick-ACNB
    I am currently creating a web application that will be deployed in an intranet environment. I chose Firefox as the browser that will run it. However, in the application I am building, I need to be able to print to different printers quickly, since they use different paper sizes depending on what client is coming. This is to avoid the many time-wasting mistakes that could occur, for instance someone choosing the wrong printer and wasting paper; also, the time spent finding the right printer for the job and then pressing print is considered too long in the current context. Is there any solution to this problem?

    I understand the potential security flaw behind this, but please be aware that this is solely an intranet project and that I can reduce the browser's security to the lowest setting, since the machines don't access the internet. I know there could be something doable behind IE (ActiveX or VBScript), but I am using Firefox. Also, I guess there could be something rather tricky: when you press print in the browser, it saves what needs to be printed to a DB, and an exe app fetches from that DB every set amount of time and prints to the right printer.

    Any suggestion would be greatly appreciated. I doubt I am the only one to ever face this issue! :) Thank you very much.

  • Ideas on frameworks in .NET that can be used for job processing and notifications

    - by Rajat Mehta
    Scenario: We have one instance of a WCF Windows service which exposes contracts like AddNewJob(Job job), GetJobs(JobQuery query), etc. This service is consumed by 70-100 instances of the client, which is a Windows Forms based .NET app. Typically the service has 50-100 inward calls/minute to add or query jobs that are stored in a table on SQL Server. The same service is also responsible for processing these jobs in real time: it queries the database every 5 seconds, picks up the queued jobs and starts processing them. A job has seven states: Queued, Pre-processing, Processing, Post-processing, Completed, Failed, Locked. Another responsibility of this service is to update all clients on every state change of every job. This means almost 200+ callbacks to clients per second.

    Question: This whole implementation is done using WCF duplex bindings and works perfectly fine on a small number of parallel jobs. The problem arises when we scale it up to 1000 jobs at a time: the notifications don't work as expected, it leads to memory overflow, etc. Is there any standard framework that can provide a clean infrastructure for handling this scenario? Apologies for the long explanation!

  • Generating two thumbnails from the same image in Django

    - by Titus
    Hello, this seems like quite an easy problem, but I can't figure out what is going on here. Basically, what I'd like to do is create two different thumbnails from one image on a Django model. What ends up happening is that it seems to loop, recreating the same image (while appending an underscore to it each time) until it throws an error that the filename is too long. So, you end up with something like:

        OSError: [Errno 36] File name too long: 'someimg________________etc.jpg'

    Here is the code (with the two likely culprits fixed and flagged in comments):

        def save(self, *args, **kwargs):
            if self.image:
                iname = os.path.split(self.image.name)[-1]
                fname, ext = os.path.splitext(iname)
                tlname, tsname = fname + '_thumb_l' + ext, fname + '_thumb_s' + ext
                # FieldFile.save() re-saves the model by default, which re-enters
                # this method and regenerates the thumbnails each time; passing
                # save=False avoids that recursion (the likely underscore loop).
                self.thumb_large.save(tlname, make_thumb(self.image, size=(250, 250)), save=False)
                self.thumb_small.save(tsname, make_thumb(self.image, size=(100, 100)), save=False)
            super(Artist, self).save(*args, **kwargs)

        def make_thumb(infile, size=(100, 100)):
            infile.seek(0)
            image = Image.open(infile)
            if image.mode not in ('L', 'RGB'):
                image = image.convert('RGB')  # convert() returns a copy; it must be assigned
            image.thumbnail(size, Image.ANTIALIAS)
            temp = StringIO()
            image.save(temp, 'png')
            return ContentFile(temp.getvalue())

    I didn't show imports for the sake of brevity. Assume there are two ImageFields on the Artist model: thumb_large and thumb_small. If this isn't the correct way to do it, I'd appreciate any feedback. Thanks!

  • Prevent TEXTAREAs from scrolling by themselves in IE8

    - by Justin Grant
    IE8 has a known bug (per connect.microsoft.com) where typing or pasting text into a TEXTAREA element will cause the textarea to scroll by itself. This is hugely annoying and shows up on many community sites, including Wikipedia. The repro is this:

      1. Open the HTML below with IE8 (or use any long page on Wikipedia, which will exhibit the same problem until they fix it)
      2. Size the browser full-screen
      3. Paste a few pages of text into the TEXTAREA
      4. Move the scrollbar to the middle position
      5. Now type one character into the textarea

    Expected: nothing happens. Actual: scrolling happens on its own, and the insertion point ends up near the bottom of the textarea!

    Below is the repro HTML (you can also see this live on the web here: http://en.wikipedia.org/w/index.php?title=Text_box&action=edit):

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
            "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <body>
          <div style="width: 80%">
            <textarea rows="20" cols="80" style="width:100%;"></textarea>
          </div>
        </body>
        </html>

  • EC2 persistence of machine

    - by Seagull
    I want to 'persist' my Amazon EC2 images. My scenario:

      - I have a range of Windows and Linux machines
      - Some machines are EBS backed, whereas others are S3 backed
      - I need to be able to persist a machine (put it to sleep), preferably keeping all settings active as I had them when the machine was running
      - I need to be able to quickly wake a machine from sleep [ideally with an SLA of less than 2 min to turn on, if such an SLA is available with Amazon]

    Here's the stuff that confuses me:

      - AWS allows me to put EBS backed machines to sleep, but not S3 backed.
      - I believe I can put S3 machines into some sort of persistence mode. But this involves shutting down the machine, writing it to S3 storage and then recovering from there (not a real sleep mode, but at least I don't continue to get billed for CPU).
      - S3 backing seems to take a long time to either write a machine to disk or to recover (turn on a machine).
      - I can't tell immediately which machines are EBS backed and which are S3 backed. It seems like I can instantiate either type, but it's not immediately clear how Amazon decides whether a given machine should be EBS or S3 backed.

    Advice?
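
    On telling the two kinds apart programmatically: the EC2 API reports a root device type per instance, and only the EBS-backed ones support stop/start (the "sleep" that keeps state without CPU billing). A hedged sketch using the boto library, assuming a boto version recent enough to expose root_device_type; the credentials are placeholders:

        # Hedged sketch with boto: list instances with their root device type
        # ('ebs' or 'instance-store'); only EBS-backed instances can be stopped
        # and later started again with their state intact.
        from boto.ec2.connection import EC2Connection

        conn = EC2Connection("ACCESS_KEY", "SECRET_KEY")
        for reservation in conn.get_all_instances():
            for inst in reservation.instances:
                print("%s %s %s" % (inst.id, inst.state, inst.root_device_type))
                if inst.root_device_type == "ebs" and inst.state == "running":
                    conn.stop_instances(instance_ids=[inst.id])  # put to sleep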

  • How to create a trackball event in an Android custom adapter?

    - by UMMA
    Dear friends, I am using the following code to create a custom adapter for a ListView. Now I want to use the trackball click event in it, but I don't know how to do that. Can anyone help me out in creating an onTrackballEvent in a custom adapter? I have tried writing a few lines but have not been able to solve it.

        public class EfficientAdapter extends BaseAdapter implements Filterable {
            private LayoutInflater mInflater;
            private Context context;
            int pos;

            public EfficientAdapter(Context context) {
                mInflater = LayoutInflater.from(context);
                this.context = context;
            }

            public View getView(final int position, View convertView, ViewGroup parent) {
                ViewHolder holder;
                convertView = mInflater.inflate(R.layout.adaptor_contentposts, null);
                convertView.setOnClickListener(new OnClickListener() {
                    @Override
                    public void onClick(View v) {
                        // click functionality
                    }
                });
                // This is the broken part: createFromParcel(null) has no event
                // data to read and will throw at runtime.
                MotionEvent event = MotionEvent.CREATOR.createFromParcel(null);
                switch (event.getAction()) {
                    case MotionEvent.ACTION_DOWN:
                        // display click message
                        break;
                }
                convertView.onTrackballEvent(event);
                return convertView;
            }

            class ViewHolder {
                TextView textLine;
                TextView textLine2;
                TextView PostedByAndPostedOn;
                ImageButton ImgButton;
            }

            @Override
            public Filter getFilter() {
                return null;
            }

            @Override
            public long getItemId(int position) {
                return 0;
            }

            @Override
            public int getCount() {
                return ad_id.length;
            }

            @Override
            public Object getItem(int position) {
                return ad_id[position];
            }
        }

  • VS 2008 C++ build output?

    - by STingRaySC
    Why, when I watch the build output from a VC++ project in VS, do I see:

        1>Compiling...
        1>a.cpp
        1>b.cpp
        1>c.cpp
        1>d.cpp
        1>e.cpp
        [etc...]
        1>Generating code...
        1>x.cpp
        1>y.cpp
        [etc...]

    The output looks as though several compilation units are being handled before any code is generated. Is this really going on? I'm trying to improve build times, and by using pre-compiled headers I've gotten great speedups for each ".cpp" file, but there is a relatively long pause during the "Generating Code..." message. I do not have "Whole Program Optimization" or "Link Time Code Generation" turned on. If this is the case, then why? Why doesn't VC++ compile each ".cpp" individually (which would include the code generation phase)? If this isn't just an illusion of the output, is there cross-compilation-unit optimization potentially going on here? There don't appear to be any compiler options to control that behavior (I know about WPO and LTCG, as mentioned above).

    EDIT: The build log just shows the ".obj" files in the output directory, one per line. There is no indication of "Compiling..." vs. "Generating code..." steps.

    EDIT: I have confirmed that this behavior has nothing to do with the "maximum number of parallel project builds" setting in Tools - Options - Projects and Solutions - Build and Run. Nor is it related to the MSBuild project build output verbosity setting. Indeed, if I cancel the build before the "Generating code..." step, the ".obj" files will not exist for the most recent set of "compiled" files. E.g., if I cancel the build during "c.cpp" above, I will see only "a.obj" and "b.obj".

  • What options exist to get text/list items to appear in columns?

    - by alex
    I have a bunch of HTML markup coming from an external source, and it is mainly h3 elements and ul elements. I want to have the content flow in 3 columns. I know there is something coming in CSS3, but what options do I have to get this content to flow nicely into the 3 columns in the meantime? I'm not that concerned about IE6 (as long as it degrades gracefully). Am I stuck using jQuery to parse the markup and chop it up into 3 divs which float? Thank you.

    Update: As per request, here is some of the HTML I am working with:

        <h3>Tourism Industry</h3>
        <ul>
            <li><a href="">Something</a></li>
            <li><a href="">Something</a></li>
            <li><a href="">Something</a></li>
            <li><a href="">Something</a></li>
        </ul>

        <h3>Small Business</h3>
        <ul>
            <li><a href="">Something</a></li>
            <li><a href="">Something</a></li>
            <li><a href="">Something</a></li>
            <li><a href="">Something</a></li>
            <li><a href="">Something</a></li>
        </ul>

    And a whole lot more following the same format.

  • Have I found a security problem in an API or do I just not understand SSL?

    - by jamieb
    I'm working on building a set of Python bindings around an XML-based API provided by a vendor. The vendor requires that all transactions be conducted over SSL. Using a Linux box, I created a key file and a CSR for my application. Using their self-service web portal, I then generated a certificate from that CSR. Both the key file and the certificate are used when making the SSL request to the API.

    I'm now working on designing exception classes to make error messages more verbose (and, hopefully, more useful to developers using my bindings). Part of my testing has included altering the key file: transpose a couple of characters here, replace 4 or 5 with random characters there, etc. To my surprise, altering the key file had no effect! As long as I didn't change its total length, the API didn't complain about a bad key file. The only way I was able to trigger an error was by swapping in a completely different key from another application. At that point, the API complained about the Common Name not matching.

    Is this normal behavior, or has the vendor not properly implemented SSL?

  • Indentation control while developing a small Python-like language

    - by sap
    Hello, I'm developing a small Python-like language using flex and byacc (for lexing and parsing) and C++, but I have a few questions regarding scope control. Just like Python, it uses white space (or tabs) for indentation. Not only that, but I want to implement index breaking: for instance, if you type "break 2" inside a while loop that's inside another while loop, it would break not only from the last loop but from the first one as well (hence the number 2 after break), and so on. Example:

        while 1
            while 1
                break 2
                'hello world'!!        # will never reach this. "!!" outputs with a newline
            end
            'hello world again'!!      # also will never reach this. again "!!" is used for cout
        end
        # after "break 2" execution would jump right here

    But since I don't have an "anti" tab character to check when a scope ends (unlike C, for example, where I would just use the '}' char), I was wondering if this method would be the best: I would define a global variable, like "int tabIndex", in my yacc file, which I would access in my lex file using extern. Then every time I find a tab character in my lex file, I would increment that variable by 1. When parsing in my yacc file, if I find a "break" keyword, I would decrement the amount typed after it from the tabIndex variable, and if I reach EOF after compiling and get tabIndex != 0, I would output a compilation error.

    Now the problem is, what's the best way to see if the indentation got reduced? Should I read \b (backspace) chars from lex and then reduce the tabIndex variable (when the user doesn't use break)? Or is there another method to achieve this?

    Also, just another small question: I want every executable to have its starting point in a function called start(). Should I hardcode this into my yacc file?

    Sorry for the long question; any help is greatly appreciated. Also, if someone can provide a yacc file for Python, it would be nice as a guideline (I tried looking on Google and had no luck). Thanks in advance.
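
    For reference, Python's own tokenizer solves the "no anti-tab character" problem with a stack of indentation widths rather than a single counter: each line's leading whitespace is compared against the top of the stack, and one DEDENT token is emitted per level closed. A minimal sketch of that technique (in Python, since that is the model language; the token names are illustrative):

        # Minimal sketch of the INDENT/DEDENT stack technique used by Python's
        # tokenizer. A real lexer would also reject a width that matches no
        # open level, and would normalize tabs vs. spaces consistently.
        def indent_tokens(lines):
            stack = [0]                          # widths of currently open scopes
            for line in lines:
                if not line.strip():
                    continue                     # blank lines don't change scope
                width = len(line) - len(line.lstrip(' \t'))
                if width > stack[-1]:
                    stack.append(width)
                    yield ('INDENT', width)
                while width < stack[-1]:
                    stack.pop()                  # one DEDENT per closed scope
                    yield ('DEDENT', width)
                yield ('STMT', line.strip())
            while len(stack) > 1:                # close scopes still open at EOF
                stack.pop()
                yield ('DEDENT', 0)

    With DEDENT tokens in the stream, the grammar can match scope ends explicitly, and "break 2" becomes an ordinary statement the parser validates against the number of enclosing loops; no backspace characters or global tab counter are needed.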

  • Method for defining simultaneous has-many and has-one associations between two models in CakePHP?

    - by Hobonium
    One thing with which I have long had problems, within the CakePHP framework, is defining simultaneous hasOne and hasMany relationships between two models. For example:

      - BlogEntry hasMany Comment
      - BlogEntry hasOne MostRecentComment (where MostRecentComment is the Comment with the most recent created field)

    Defining these relationships in the BlogEntry model properties is problematic. CakePHP's ORM implements a has-one relationship as an INNER JOIN, so as soon as there is more than one Comment, BlogEntry::find('all') calls return multiple results per BlogEntry. I've worked around these situations in the past in a few ways:

      1. Using a model callback (or, sometimes, even in the controller or view!), I've simulated a MostRecentComment with:

             $this->data['MostRecentComment'] = $this->data['Comment'][0];

         This gets ugly fast if, say, I need to order the Comments any way other than by Comment.created. It also doesn't let Cake's in-built pagination features sort by MostRecentComment fields (e.g. sort BlogEntry results reverse-chronologically by MostRecentComment.created).

      2. Maintaining an additional foreign key, BlogEntry.most_recent_comment_id. This is annoying to maintain, and breaks Cake's ORM: the implication is BlogEntry belongsTo MostRecentComment. It works, but just looks... wrong.

    These solutions left much to be desired, so I sat down with this problem the other day and worked on a better solution. I've posted my eventual solution below, but I'd be thrilled (and maybe just a little mortified) to discover there is some mind-blowingly simple solution that has escaped me this whole time. Or any other solution that meets my criteria:

      - it must be able to sort by MostRecentComment fields at the Model::find level (i.e. not just a massage of the results);
      - it shouldn't require additional fields in the comments or blog_entries tables;
      - it should respect the 'spirit' of the CakePHP ORM.

    (I'm also not sure the title of this question is as concise/informative as it could be.)

  • Throwing a C++ exception after an inline-asm jump

    - by SoapBox
    I have some odd self-modifying code, but at the root of it is a pretty simple problem: I want to be able to execute a jmp (or a call) and then, from that arbitrary point, throw an exception and have it caught by the try/catch block that contained the jmp/call. But when I do this (in gcc 4.4.1 x86_64), the exception results in a terminate() as it would if the exception were thrown from outside of a try/catch. I don't really see how this is different from throwing an exception from inside of some far-flung library, yet it obviously is, because it just doesn't work. How can I execute a jmp or call but still throw an exception back to the original try/catch? Why doesn't this try/catch continue to handle these exceptions as it would if the function were called normally?

    The code:

        #include <iostream>
        #include <stdexcept>
        using namespace std;

        void thrower() {
            cout << "Inside thrower" << endl;
            throw runtime_error("some exception");
        }

        int main() {
            cout << "Top of main" << endl;
            try {
                asm volatile (
                    "jmp *%0"   // same thing happens with a call instead of a jmp
                    :
                    : "r"((long)thrower)
                    :
                );
            } catch (exception &e) {
                cout << "Caught : " << e.what() << endl;
            }
            cout << "Bottom of main" << endl << endl;
        }

    The expected output:

        Top of main
        Inside thrower
        Caught : some exception
        Bottom of main

    The actual output:

        Top of main
        Inside thrower
        terminate called after throwing an instance of 'std::runtime_error'
          what():  some exception
        Aborted

  • Is a nested synchronized block necessary?

    - by Dan
    I am writing a multithreaded program, and I have a method that has nested synchronized blocks. I was wondering if I need the inner sync or if just the outer sync is good enough.

        public class Tester {

            private BlockingQueue<Ticket> q = new LinkedBlockingQueue<>();
            private ArrayList<Long> list = new ArrayList<>();

            public void acceptTicket(Ticket p) {
                try {
                    synchronized (q) {
                        q.put(p);
                        synchronized (list) {
                            // keep only the sizes of the last 5 tickets
                            if (list.size() < 5) {
                                list.add(p.getSize());
                            } else {
                                list.remove(0);
                                list.add(p.getSize());
                            }
                        }
                    }
                } catch (InterruptedException ex) {
                    Logger.getLogger(Consumer.class.getName()).log(Level.SEVERE, null, ex);
                }
            }
        }

    EDIT: This isn't a complete class, as I am still working on it. But essentially I am trying to emulate a ticket machine. The ticket machine maintains a list of tickets in the BlockingQueue q. Whenever a client adds a ticket to the machine, the machine also keeps track of the price of the last 5 tickets (ArrayList list).

  • MySQL Normalization stored procedure performance

    - by srkiNZ84
    Hi, I've written a stored procedure in MySQL to take values currently in a table and "normalize" them. This means that for each value passed to the stored procedure, it checks whether the value is already in the table. If it is, then it stores the id of that row in a variable. If the value is not in the table, it stores the newly inserted value's id. The stored procedure then takes the ids and inserts them into a table which is equivalent to the original de-normalized table, but this table is fully normalized and consists mainly of foreign keys.

    My problem with this design is that the stored procedure takes approximately 10ms or so to return, which is too long when you're trying to work through some 10 million records. My suspicion is that the performance is to do with the way in which I'm doing the inserts, i.e.:

        INSERT INTO TableA (first_value) VALUES (argument_from_sp)
            ON DUPLICATE KEY UPDATE id = LAST_INSERT_ID(id);
        SET @TableAId = LAST_INSERT_ID();

    The "ON DUPLICATE KEY UPDATE" is a bit of a hack, due to the fact that on a duplicate key I don't want to update anything but rather just return the id value of the row. If you miss this step, though, the LAST_INSERT_ID() function returns the wrong value when you're trying to run the "SET ..." statement. Does anyone know of a better way to do this in MySQL? Thank you

  • How to use the LINQ where expression?

    - by NullReference
    I'm implementing the service \ repository pattern in a new project. I've got a base interface that looks like this. Everything works great until I need to use the GetMany method; I'm just not sure how to pass a LINQ expression into it. For example, how would I simply sort a list of objects of type name? nameRepository.GetMany( ? )

        public interface IRepository<T> where T : class
        {
            void Add(T entity);
            void Update(T entity);
            void Delete(T entity);
            void Delete(Expression<Func<T, bool>> where);
            T GetById(long Id);
            T GetById(string Id);
            T Get(Expression<Func<T, bool>> where);
            IEnumerable<T> GetAll();
            IEnumerable<T> GetMany(Expression<Func<T, bool>> where);
        }

        public virtual IEnumerable<T> GetMany(Expression<Func<T, bool>> where)
        {
            return dbset.Where(where).ToList();
        }

  • Using SVN post-commit hook to update only files that have been committed

    - by fondie
    I am using an SVN repository for my web development work. I have a development site set up which holds a checkout of the repository. I have set up an SVN post-commit hook so that whenever a commit is made to the repository, the development site is updated:

        cd /home/www/dev_ssl
        /usr/bin/svn up

    This works fine, but due to the size of the repository the updates take a long time (approx. 3 minutes), which is rather frustrating when making regular commits. What I'd like is to change the post-commit hook to only update those files/directories that have been committed, but I don't know how to go about doing this. Updating the "lowest common directory" would probably be the best solution, e.g. if committing the following files:

        /branches/feature_x/images/logo.jpg
        /branches/feature_x/css/screen.css

    it would update the directory:

        /branches/feature_x/

    Can anyone help me create a solution that achieves this please? Thanks!

    Update: The repository and development site are located on the same server, so network issues shouldn't be involved. CPU usage is very low, and I/O should be OK (it's running on a hi-spec dedicated server). The development site is approx. 7.5GB in size and contains approx. 600,000 items; this is mainly due to having multiple branches/tags.
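
    One way to implement the "lowest common directory" idea: ask svnlook which paths the revision touched, compute their common parent, and update only that subtree of the working copy. A hedged sketch of such a post-commit hook in Python (untested; the svnlook output parsing and the assumption that the checkout mirrors the repository root are things to verify):

        # Hedged sketch of a post-commit hook: Subversion invokes it with the
        # repository path and revision as arguments. Updates only the lowest
        # common directory of the paths changed in that revision.
        import os
        import subprocess
        import sys

        REPOS, REV = sys.argv[1], sys.argv[2]
        WORKING_COPY = "/home/www/dev_ssl"   # checkout path from the hook above

        out = subprocess.check_output(
            ["/usr/bin/svnlook", "changed", "-r", REV, REPOS])
        # svnlook prints two status columns plus padding before each path.
        paths = [line[4:] for line in out.decode().splitlines() if line.strip()]

        # commonprefix is character-wise, so trim back to a whole directory.
        common = os.path.dirname(os.path.commonprefix(paths))
        subprocess.call(["/usr/bin/svn", "up", os.path.join(WORKING_COPY, common)])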

  • Looking for combinations of server and embedded database engines

    - by codeelegance
    I'm redesigning an application that will be run as both a single user and a multiuser application. It is a .NET 2.0 application. I'm looking for server and embedded databases that work well together. I want to deploy the embedded database in the single user setup and, of course, the server in the multiuser setup.

    Past releases have been based on MSDE, but in the past year we've been having a lot of install issues: new installs hanging and leaving the system in an unknown state, upgrades disconnecting the database, etc. I migrated the application to SQL Server 2005 and the install is more reliable (as long as a user doesn't try to install over a broken MSDE installation). Since next year's release will be a complete redesign, I figured now's the best time to address the database issue as well. The database has been abstracted from the rest of the application, so I just need to choose which database(s) to use and write an implementation for each one. So far I've considered:

      - SQL Server / SQL Server Compact Edition
      - Firebird (the same DB engine is available in two different server modes and an embedded DLL)

    Each has its own merits, but I'm also interested in any other suggestions. This is a fairly simple program and its data requirements are simple as well. I don't expect it to strain whatever database I eventually choose, so easy configuration and deployment hold more weight than performance.

  • New Rails deployment half working...not sure why?

    - by Meltemi
    I'm in the final stages of going round trip through the entire Rails cycle: development - test - production (on an external server). I'm very close... but I'm seeing some errors with the production version and don't know enough about Rails' "magic" to troubleshoot it yet.

    All seems well in that I can load my app by going to www.mydomain.com/rails and it seems to work: I can interact with my app and create new objects in my database. However, when I go to www.mydomain.com/rails/ (the difference is the trailing slash), I get a webpage that just says "Index from public" on it?!? I don't know where that's coming from; index.html is long removed from public.

    This may or may not be related to why I can't access, say, the first record in one of my classes by calling its controller like so: www.mydomain.com/rails/mycontroller/1 . And yes, there are records in the database. Anyway, something's askew. It works fine on the development box but is only half working on the production server. Does anyone know what might be causing this? For context:

      - This app is running on the server box (not development)
      - Installed Passenger on the server to serve the app via Apache
      - Modified my httpd.conf with Passenger's stuff (a headache, but finally got it to respond)
      - Loaded the schema into the production database: rake db:schema:load RAILS_ENV=production

  • URL shortening: using inode as short name?

    - by Licky Lindsay
    The site I am working on wants to generate its own shortened URLs rather than rely on a third party like tinyurl or bit.ly. Obviously I could keep a running count of new URLs as they are added to the site and use that to generate the short URLs. But I am trying to avoid that if possible, since it seems like a lot of work just to make this one thing work. As the things that need short URLs are all real physical files on the webserver, my current solution is to use their inode numbers, as those are already generated for me, ready to use, and guaranteed to be unique (at least within one filesystem):

        function short_name($file) {
            $ino = @fileinode($file);
            $s = base_convert($ino, 10, 36);
            return $s;
        }

    This seems to work. The question is, what can I do to make the short URL even shorter? On the system where this is being used, the inodes for newly added files are in a range that makes the function above return a string 7 characters long. Can I safely throw away some (half?) of the bits of the inode? And if so, should it be the high bits or the low bits? I thought of using the crc32 of the filename, but that actually makes my short names longer than using the inode.

    Would something like this have any risk of collisions? I've been able to get down to single digits by picking the right value of "$referencefile".

        function short_name($file) {
            $ino = @fileinode($file);
            // arbitrarily selected pre-existing file,
            // as all newer files will have higher inodes
            $ino = $ino - @fileinode($referencefile);
            $s = base_convert($ino, 10, 36);
            return $s;
        }
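
    On the length question, base-36 output length is purely a function of the value's magnitude: n characters cover values up to 36^n, so 36^6 = 2,176,782,336 is the cutoff between 6- and 7-character names. A quick Python check of those thresholds (mirroring PHP's base_convert; the sample values are made up):

        # Quick check of base-36 name length vs. value magnitude; the sample
        # values are illustrative. 36**5 = 60466176, 36**6 = 2176782336.
        def base36(n):
            digits = "0123456789abcdefghijklmnopqrstuvwxyz"
            out = ""
            while n:
                n, r = divmod(n, 36)
                out = digits[r] + out
            return out or "0"

        for ino in (60466175, 60466176, 2176782335, 2176782336):
            print("%d -> %s (%d chars)" % (ino, base36(ino), len(base36(ino))))

    Dropping low bits would collide immediately for consecutive inodes; dropping high bits (which is roughly what subtracting a reference inode does) stays collision-free only while every inode on the filesystem fits in the bits you keep.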

  • Publishing a WCF Server and client and their endpoints

    - by Ahmadreza
    Imagine developing a WCF solution with two projects (a WCF service and a web application as the WCF client). As long as I'm developing these two projects in Visual Studio and referencing the service in the client (web application) as a service reference, there is no problem. Visual Studio automatically assigns a port for the WCF server and generates all the needed configuration, including server and client bindings, to something like this on the server:

        <service behaviorConfiguration="DefaultServiceBehavior" name="MYWCFProject.MyService">
          <endpoint address="" binding="wsHttpBinding" contract="MYWCFProject.IMyService">
            <identity>
              <dns value="localhost" />
            </identity>
          </endpoint>
          <host>
            <baseAddresses>
              <add baseAddress="http://localhost:8731/MyService.svc" />
            </baseAddresses>
          </host>
        </service>

    and on the client:

        <client>
          <endpoint address="http://localhost:8731/MyService.svc" binding="wsHttpBinding"
              bindingConfiguration="WSHttpBinding_IMyService" contract="MyWCFProject.IMyService"
              name="WSHttpBinding_IMyService">
            <identity>
              <dns value="localhost" />
            </identity>
          </endpoint>
        </client>

    The problem is that I want to frequently publish these two projects to two different production servers, where the service URL will be "http://mywcfdomain/MyService.svc". I don't want to change the config file every time I publish my server project. The question is: is there any feature in Visual Studio 2008 to automatically change the URLs, or do I have to define two different endpoints and set them within my code (based on a parameter in my configuration, for example Development/Published)?

  • Hadoop/MapReduce: Reading and writing classes generated from DDL

    - by Dave
    Hi, can someone walk me through the basic workflow of reading and writing data with classes generated from DDL?

    I have defined some struct-like records using DDL. For example:

        class Customer {
            ustring FirstName;
            ustring LastName;
            ustring CardNo;
            long LastPurchase;
        }

    I've compiled this to get a Customer class and included it in my project. I can easily see how to use this as input and output for mappers and reducers (the generated class implements Writable), but not how to read and write it to file. The JavaDoc for the org.apache.hadoop.record package talks about serializing these records in binary, CSV or XML format. How do I actually do that? Say my reducer produces IntWritable keys and Customer values. What OutputFormat do I use to write the result in CSV format? And what InputFormat would I use to read the resulting files in later, if I wanted to perform analysis over them?
