Search Results

Search found 8543 results on 342 pages for 'documentation'.

Page 277 of 342

  • Does HttpListener work well on Mono?

    - by billpg
    Hi everyone. I'm looking to write a small web service to run on a small Linux box. I prefer to code in C#, so I'm looking to use Mono. I don't want the overhead of running a full web server or Mono's version of ASP.NET. I'm thinking of having a single process with a thread dealing with each client connection, and shared memory between threads instead of a database.

    I've read a little on Microsoft's version of HttpListener and how it works with the Http.sys driver. Alas, Mono's documentation on this class is just the automated class interface with no discussion of how it works under the hood. (Linux doesn't have Http.sys, so I imagine it's implemented substantially differently.) Could anyone point me towards some resources discussing this module please? Many thanks, Bill, billpg.com

    (A little background to my question for the interested.) Some time ago, I asked this question, interested in keeping a long conversation open with lots of back-and-forth. I had settled on designing my own ad-hoc protocol, but people I spoke to really wanted a REST interface, even at the cost of the "Okay, send your command now" signal. So, I wondered about running ASP.NET on a Linux/Mono server, but stumbled upon HttpListener. This seemed ideal, as each "conversation" could run in a separate thread. The thread that calls HttpListener in a loop can look for which thread each incoming connection is for and pass the reference to that thread. The alternative for an ASP.NET driven service would be to have the ASPX code pick up the state from a database, and write back the new state when it finishes. Yes, it would work, but that's a lot of overhead.
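    For reference, the basic HttpListener usage pattern the question assumes looks roughly like this on either runtime (a minimal sketch only; the prefix, port and thread-pool dispatch are placeholder choices, and nothing here is specific to how Mono implements the class):

        using System;
        using System.Net;
        using System.Text;
        using System.Threading;

        class MiniService
        {
            static void Main()
            {
                // One listener for the whole process; on Linux/Mono this cannot go through Http.sys.
                var listener = new HttpListener();
                listener.Prefixes.Add("http://*:8080/");
                listener.Start();

                while (true)
                {
                    // Blocks until a request arrives, then hands the context to a worker thread.
                    HttpListenerContext context = listener.GetContext();
                    ThreadPool.QueueUserWorkItem(_ =>
                    {
                        byte[] body = Encoding.UTF8.GetBytes("hello");
                        context.Response.ContentLength64 = body.Length;
                        context.Response.OutputStream.Write(body, 0, body.Length);
                        context.Response.Close();
                    });
                }
            }
        }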


  • Javascript with Django?

    - by Rosarch
    I know this has been asked before, but I'm having a hard time setting up JS on my Django web app, even though I'm reading the documentation. I'm running the Django dev server. My file structure looks like this:

        mysite/
            __init__.py
            MySiteDB
            manage.py
            settings.py
            urls.py
            myapp/
                __init__.py
                admin.py
                models.py
                test.py
                views.py
                templates/
                    index.html

    Where do I want to put the Javascript and CSS? I've tried it in a bunch of places, including myapp/, templates/ and mysite/, but none seem to work. From index.html:

        <head>
            <title>Degree Planner</title>
            <script type="text/javascript" src="/scripts/JQuery.js"></script>
            <script type="text/javascript" src="/media/scripts/sprintf.js"></script>
            <script type="text/javascript" src="/media/scripts/clientside.js"></script>
        </head>

    From urls.py:

        (r'^admin/', include(admin.site.urls)),
        (r'^media/(?P<path>.*)$', 'django.views.static.serve', {'document_root': 'media'})
        (r'^.*', 'mysite.myapp.views.index'),

    I suspect that the serve() line is the cause of errors like:

        TypeError at /admin/auth/
        'tuple' object is not callable

    Just to round off the rampant flailing, I changed these settings in settings.py:

        MEDIA_ROOT = '/media/'
        MEDIA_URL = 'http://127.0.0.1:8000/media'
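    For comparison, the shape the old (pre-1.4) Django docs gave for serving static media from the dev server is roughly the following (a sketch; the media directory, its absolute path and the catch-all view are assumptions carried over from the question, and note that every entry in the pattern list ends with a comma so the list stays a sequence of tuples):

        import os
        from django.conf.urls.defaults import patterns, include
        from django.contrib import admin

        admin.autodiscover()

        # Assumed on-disk location of the static files next to settings.py.
        MEDIA_DIR = os.path.join(os.path.dirname(__file__), 'media')

        urlpatterns = patterns('',
            (r'^admin/', include(admin.site.urls)),
            (r'^media/(?P<path>.*)$', 'django.views.static.serve',
                {'document_root': MEDIA_DIR}),          # trailing comma after each pattern
            (r'^.*$', 'mysite.myapp.views.index'),
        )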


  • nhibernate - mapping with constraints

    - by Tobias Müller
    Hello everybody, I am having a problem with my nhibernate mapping and I can't find a solution by searching on stackoverflow/google/documentation. The database I am using has (amongst others) two tables. One is unit with the following fields:

        id
        enduring_id
        starts
        ends
        damage_enduring_id
        [...]

    The other one is damage, which has the following fields:

        id
        enduring_id
        starts
        ends
        [...]

    The units are assigned to a damage and one damage can have zero, one or more units working on it. Every time a unit moves to another damage, the dataset is copied. The field "ends" of the old record and "starts" of the new record are set to the current time stamp, and enduring_id stays the same. So if I want to know which units were working on a damage at a certain time, I do the following select:

        select *
        from unit
        join damage on damage.enduring_id = unit.damage_enduring_id
        where unit.starts <= 'time' and unit.ends >= 'time'

    (This is not an actual query from the database, I made it up to make clear what I mean. The real database is a little more complex.)

    Now I want to map it so that I can load all the damages which are valid at one time (starts <= wanted time <= ends), and so that each of them has a Bag with all the attached units at that time (again starts <= wanted time <= ends). Is this possible within the mapping? Sorry if this is a stupid question, but I am pretty new to nhibernate and I have no clue how to do it. Thanks a lot for reading my post! Bye, Tobias


  • SVN authz, path-based authentication woes

    - by Ronny
    My authz configuration:

        [groups]
        developer = a,b,c
        doc = r,x

        [/doc]
        @doc = rw
        @developer = rw

        [/]
        @developer = rw
        * =

    If now a member of the group doc tries to check out the documentation, it does not work. I want members of doc just to be able to check out the sub-dir doc; anything else is forbidden. Any ideas how to achieve this? kind regards, ronny

    [update]
    client: svn, version 1.5.4 (r33841)
    server: svn, version 1.4.6 (r28521)
    access via svn+ssh://user@host/fullpath-to-repos
    1. perfectly works for two years
    2. might be - see version numbers above (I'll contact our admin immediately)
    3. no? just ssh
    4. nope
    5. nope

    [update] Using client version svn 1.4.6 (r28521) does not work either - same errors. I use plain command line access: svn co svn+ssh://....

    [update]
    server: Linux 2.6.16.60-0.39.3-default9 i686 athlon i386 GNU/Linux - SuSE 10? or something like that, I think
    client: Kubuntu 9.04, connection via the OpenSSH SSH client
    The server rejects svn:// connections from localhost - any connection --- gotta try it with a copy at home some time soon.

    [update 4] This is not my own server, I cannot do what I want with it. It is a very old server, 10 years at least running, with hundreds of users. Standard things should work; correct me if I am missing something.

    [update 5] Believe it or not, I was using the wrong path and now everything works perfectly well. I am sorry to have wasted your time. I'll give the bounty to FoxyBOA for his effort.


  • How can I programmatically add more than just one view object to my view controller?

    - by BeachRunnerJoe
    I'm diving into iPhone OS development and I'm trying to understand how I can add multiple view objects to the "Left/Root" view of my SplitView iPad app. I've figured out how to programmatically add a TableView to that view based on the example code I found in Apple's online documentation...

    RootViewController.h

        @interface RootViewController : UITableViewController <NSFetchedResultsControllerDelegate, UITableViewDelegate, UITableViewDataSource> {
            DetailViewController *detailViewController;
            UITableView *tableView;
            NSFetchedResultsController *fetchedResultsController;
            NSManagedObjectContext *managedObjectContext;
        }

    RootViewController.m

        - (void)loadView {
            UITableView *newTableView = [[UITableView alloc] initWithFrame:[[UIScreen mainScreen] applicationFrame]
                                                                     style:UITableViewStylePlain];
            newTableView.autoresizingMask = UIViewAutoresizingFlexibleHeight|UIViewAutoresizingFlexibleWidth;
            newTableView.delegate = self;
            newTableView.dataSource = self;
            [newTableView reloadData];

            self.view = newTableView;
            [newTableView release];
        }

    but there are a few things I don't understand about it and I was hoping you veterans could help clear up some confusion.

    In the statement self.view = newTableView, I assume I'm setting the entire view to a single UITableView. If that's the case, then how can I add additional view objects to that view alongside the table view? For example, if I wanted to have a DatePicker view object and the TableView object instead of just the TableView object, then how would I programmatically add that?

    Referencing the code above, how can I resize the table view to make room for the DatePicker view object that I'd like to add?

    Thanks so much in advance for your help! I'm going to continue researching these questions right now.


  • Does LaTeX have an array data structure?

    - by drasto
    Are there arrays in LaTeX? I don't mean the way to typeset arrays. I mean arrays as the data structure in LaTeX/TeX as a "programming language". I need to store a number of vbox-es or hbox-es in an array. It may be something like "an array of macros".

    More details: I have an environment that should typeset songs. I need to store some songs' paragraphs given as arguments to my macro \songparagraph (so I will not typeset them, just store those paragraphs). As I don't know how many paragraphs can be in one particular song, I need an array for this. When the environment is closed, all the paragraphs will be typeset - but they will be first measured and the best placement for each paragraph will be computed (for example, some paragraphs can be put one aside the other in two columns to make the song look more compact and save some space).

    Any ideas would be welcome. Please, if you know about arrays in LaTeX, post a link to some basic documentation, a tutorial or just state the basic commands.
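    For illustration, one classic TeX idiom for an indexed family of macros - in effect an array - is to build control-sequence names with \csname; a small sketch (the macro and counter names are made up for the example, and each stored paragraph is kept as a macro body rather than a box):

        \newcount\songparcount                      % how many paragraphs are stored

        % Store #1 under the next free index: \songpar1, \songpar2, ...
        \def\storesongparagraph#1{%
          \global\advance\songparcount by 1
          \expandafter\gdef\csname songpar\the\songparcount\endcsname{#1}%
        }

        % Fetch the paragraph stored under index #1.
        \def\getsongparagraph#1{\csname songpar#1\endcsname}

        % When the environment ends, one can loop from 1 to \songparcount,
        % measure each \getsongparagraph{...} in a box, and decide on placement.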


  • rewrite not a member of LiftRules

    - by José Leal
    Hi guys, I was following the http://www.assembla.com/wiki/show/liftweb/URL_Rewriting tutorial for URL rewriting in Liftweb, but I get this error:

        error: value rewrite is not a member of object net.liftweb.http.LiftRules

    It is really odd, and the documentation says that it exists. I'm using the IDEA IDE, and I've done everything from scratch, using the Lift Maven blank archetype. Some more info:

        [INFO] ------------------------------------------------------------------------
        [INFO] Building Joseph3
        [INFO]    task-segment: [tomcat:run]
        [INFO] ------------------------------------------------------------------------
        [INFO] Preparing tomcat:run
        [INFO] [resources:resources {execution: default-resources}]
        [WARNING] Using platform encoding (UTF-8 actually) to copy filtered resources, i.e. build is platform dependent!
        [INFO] Copying 0 resource
        [INFO] [yuicompressor:compress {execution: default}]
        [INFO] nb warnings: 0, nb errors: 0
        [INFO] artifact org.mortbay.jetty:jetty: checking for updates from scala-tools.org
        [INFO] artifact org.mortbay.jetty:jetty: checking for updates from central
        [INFO] [compiler:compile {execution: default-compile}]
        [INFO] Nothing to compile - all classes are up to date
        [INFO] [scala:compile {execution: default}]
        [INFO] Checking for multiple versions of scala
        [INFO] /home/dpz/Scala/Doit/Joseph3/src/main/scala:-1: info: compiling
        [INFO] Compiling 2 source files to /home/dpz/Scala/Doit/Joseph3/target/classes at 1274922123910
        [ERROR] /home/dpz/Scala/Doit/Joseph3/src/main/scala/bootstrap/liftweb/Boot.scala:16: error: value rewrite is not a member of object net.liftweb.http.LiftRules
        [INFO]     LiftRules.rewrite.prepend(NamedPF("ProductExampleRewrite") {
        [INFO]               ^
        [ERROR] one error found
        [INFO] ------------------------------------------------------------------------
        [ERROR] BUILD ERROR
        [INFO] ------------------------------------------------------------------------
        [INFO] wrap: org.apache.commons.exec.ExecuteException: Process exited with an error: 1(Exit value: 1)
        [INFO] ------------------------------------------------------------------------
        [INFO] For more information, run Maven with the -e switch
        [INFO] ------------------------------------------------------------------------
        [INFO] Total time: 19 seconds
        [INFO] Finished at: Thu May 27 03:02:07 CEST 2010
        [INFO] Final Memory: 20M/175M
        [INFO] ------------------------------------------------------------------------
        Process finished with exit code 1


  • Can Haskell's Parsec library be used to implement a recursive descent parser with backup?

    - by Thor Thurn
    I've been considering using Haskell's Parsec parsing library to parse a subset of Java as a recursive descent parser as an alternative to more traditional parser-generator solutions like Happy. Parsec seems very easy to use, and parse speed is definitely not a factor for me. I'm wondering, though, if it's possible to implement "backup" with Parsec, a technique which finds the correct production to use by trying each one in turn.

    For a simple example, consider the very start of the JLS Java grammar:

        Literal:
            IntegerLiteral
            FloatingPointLiteral

    I'd like a way to not have to figure out how I should order these two rules to get the parse to succeed. As it stands, a naive implementation like this:

        literal = do { x <- try (do { v <- integer; return (IntLiteral v)})
                            <|> (do { v <- float; return (FPLiteral v)});
                       return (Literal x) }

    will not work... inputs like "15.2" will cause the integer parser to succeed first, and then the whole thing will choke on the "." symbol. In this case, of course, it's obvious that you can solve the problem by re-ordering the two productions. In the general case, though, finding things like this is going to be a nightmare, and it's very likely that I'll miss some cases. Ideally, I'd like a way to have Parsec figure out stuff like this for me. Is this possible, or am I simply trying to do too much with the library? The Parsec documentation claims that it can "parse context-sensitive, infinite look-ahead grammars", so it seems like I should be able to do something here.
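    For what it's worth, a small sketch of the usual mitigation - order the longer production first and wrap it in try - with toy token parsers standing in for the real lexer (the names and types here are assumptions, not part of the question's grammar):

        import Text.Parsec
        import Text.Parsec.String (Parser)

        data Literal = IntLiteral Integer | FPLiteral Double deriving Show

        -- Toy token parsers standing in for the real ones.
        integer :: Parser Integer
        integer = fmap read (many1 digit)

        float :: Parser Double
        float = do
          whole <- many1 digit
          _     <- char '.'
          frac  <- many1 digit
          return (read (whole ++ "." ++ frac))

        -- The longer production goes first and is wrapped in `try`, so that when it
        -- fails part-way through (e.g. on "15" with no dot) Parsec backtracks and
        -- offers the same input to the integer alternative.
        literal :: Parser Literal
        literal = try (do { v <- float;   return (FPLiteral v) })
              <|>      do { v <- integer; return (IntLiteral v) }

        -- parseTest literal "15.2"   prints  FPLiteral 15.2
        -- parseTest literal "15"     prints  IntLiteral 15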


  • Cache consistency & spawning a thread

    - by Dave Keck
    Background: I've been reading through various books and articles to learn about processor caches, cache consistency, and memory barriers in the context of concurrent execution. So far though, I have been unable to determine whether a common coding practice of mine is safe in the strictest sense.

    Assumptions: The following pseudo-code is executed on a two-processor machine:

        int sharedVar = 0;

        myThread()
        {
            print(sharedVar);
        }

        main()
        {
            sharedVar = 1;
            spawnThread(myThread);
            sleep(-1);
        }

    main() executes on processor 1 (P1), while myThread() executes on P2. Initially, sharedVar exists in the caches of both P1 and P2 with the initial value of 0 (due to some "warm-up code" that isn't shown above.)

    Question: Strictly speaking – preferably without assuming any particular CPU – is myThread() guaranteed to print 1? With my newfound knowledge of processor caches, it seems entirely possible that at the time of the print() statement, P2 may not have received the invalidation request for sharedVar caused by P1's assignment in main(). Therefore, it seems possible that myThread() could print 0.

    References: These are the related articles and books I've been reading. (It wouldn't allow me to format these as links because I'm a new user - sorry.)

        Shared Memory Consistency Models: A Tutorial
        hpl.hp.com/techreports/Compaq-DEC/WRL-95-7.pdf

        Memory Barriers: a Hardware View for Software Hackers
        rdrop.com/users/paulmck/scalability/paper/whymb.2009.04.05a.pdf

        Linux Kernel Memory Barriers
        kernel.org/doc/Documentation/memory-barriers.txt

        Computer Architecture: A Quantitative Approach
        amazon.com/Computer-Architecture-Quantitative-Approach-4th/dp/0123704901/ref=dp_ob_title_bk


  • Apache CXF REST Services w/ Spring AOP

    - by jconlin
    I'm trying to get Apache CXF JAX-RS services working with Spring AOP. I've created a simple logging class:

        public class AOPLogger {
            public void logBefore() {
                System.out.println("Logging Before!");
            }
        }

    My Spring configuration (beans.xml):

        <aop:config>
            <aop:aspect id="aopLogger" ref="test.aop.AOPLogger">
                <aop:before method="logBefore" pointcut="execution(* test.rest.RestService(..))"/>
            </aop:aspect>
        </aop:config>

        <bean id="aopLogger" class="test.aop.AOPLogger"/>

    I always get an NPE in RestService when a call is made to a method getServletRequest(), which has:

        return messageContext.getHttpServletRequest();

    If I remove the AOP configuration or comment it out from my beans.xml, everything works fine. All of my actual REST services extend test.rest.RestService (which is a class) and call getServletRequest(). I'm just trying to get AOP up and running based off of the example in the CXF JAX-RS documentation. Does anyone have any idea what I'm doing wrong? Thanks!
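    For comparison, a schema-style configuration of the same aspect usually takes the shape sketched below - whether it addresses the NPE described here is not established, so treat it only as a reference shape. Two details worth noting: the AspectJ execution pointcut names a method pattern as well as a type (RestService+.*(..) meaning any method on RestService or a subclass), and aop:aspect's ref points at a bean id rather than a class name.

        <bean id="aopLogger" class="test.aop.AOPLogger"/>

        <aop:config>
            <aop:aspect id="loggingAspect" ref="aopLogger">
                <aop:before method="logBefore"
                            pointcut="execution(* test.rest.RestService+.*(..))"/>
            </aop:aspect>
        </aop:config>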


  • Do ctypes Structures and POINTERS automatically free the memory when the Python object is deleted?

    - by jsbueno
    When using Python ctypes there are the Structures, that allow you to clone C structures on the Python side, and the POINTER objects that create a sophisticated Python object from a memory address value and can be used to pass objects by reference back and forth to C code.

    What I could not find in the documentation or elsewhere is what happens when a Python object containing a Structure class that was de-referenced from a returning pointer from C code (that is - the C function allocated memory for the structure) is itself deleted. Is the memory for the original C structure freed? If not, how to do it?

    Furthermore - what if the Structure contains pointers itself, to other data that was also allocated by the C function? Does the deletion of the Structure object free the pointers on its members? (I doubt so.) Else - how to do it? Trying to call the system "free" from Python for the pointers in the Structure is crashing Python for me.

    In other words, I have this structure filled up by a C function call:

        class PIX(ctypes.Structure):
            """Comments not generated """
            _fields_ = [
                ("w", ctypes.c_uint32),
                ("h", ctypes.c_uint32),
                ("d", ctypes.c_uint32),
                ("wpl", ctypes.c_uint32),
                ("refcount", ctypes.c_uint32),
                ("xres", ctypes.c_uint32),
                ("yres", ctypes.c_uint32),
                ("informat", ctypes.c_int32),
                ("text", ctypes.POINTER(ctypes.c_char)),
                ("colormap", ctypes.POINTER(PIXCOLORMAP)),
                ("data", ctypes.POINTER(ctypes.c_uint32))
            ]

    And I want to free the memory it is using up from Python code.
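    For illustration, the usual pattern is to hand such a pointer back to whatever deallocator the C library itself provides, since ctypes only manages memory it allocated on the Python side. A sketch, reusing the PIX class above, with an entirely hypothetical library handle and function names (pix_create/pix_destroy stand in for whatever the real API exports):

        import ctypes

        lib = ctypes.CDLL("libfoo.so")      # hypothetical shared library

        # Declare the library's own constructor/destructor pair.
        lib.pix_create.restype = ctypes.POINTER(PIX)
        lib.pix_destroy.argtypes = [ctypes.POINTER(ctypes.POINTER(PIX))]

        p = lib.pix_create()                # memory allocated by the C side
        try:
            print(p.contents.w, p.contents.h)            # use the structure
        finally:
            lib.pix_destroy(ctypes.byref(p))             # freed by the C library, not by Python's GC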


  • rpy2: Converting a data.frame to a numpy array

    - by Mike Dewar
    I have a data.frame in R. It contains a lot of data: gene expression levels from many (125) arrays. I'd like the data in Python, due mostly to my incompetence in R and the fact that this was supposed to be a 30 minute job. I would like the following code to work. To understand this code, know that the variable path contains the full path to my data set which, when loaded, gives me a variable called immgen. Know that immgen is an object (a Bioconductor ExpressionSet object) and that exprs(immgen) returns a data frame with 125 columns (experiments) and tens of thousands of rows (named genes).

        robjects.r("load('%s')"%path) # loads immgen
        e = robjects.r['data.frame']("exprs(immgen)")
        expression_data = np.array(e)

    This code runs, but expression_data is simply array([[1]]). I'm pretty sure that e doesn't represent the data frame generated by exprs() due to things like:

        In [40]: e._get_ncol()
        Out[40]: 1

        In [41]: e._get_nrow()
        Out[41]: 1

    But then again who knows? Even if e did represent my data.frame, that it doesn't convert straight to an array would be fair enough - a data frame has more in it than an array (rownames and colnames) and so maybe life shouldn't be this easy. However I still can't work out how to perform the conversion. The documentation is a bit too terse for me, though my limited understanding of the headings in the docs implies that this should be possible. Anyone any thoughts?
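    One detail worth noting in the snippet above is that the string "exprs(immgen)" is passed to R's data.frame() as data rather than being evaluated, which would account for a 1x1 result. A sketch of evaluating the expression on the R side instead (this assumes rpy2's robjects interface, that the package providing exprs() is already available in that R session, and that exprs(immgen) comes back as a plain numeric matrix):

        import numpy as np
        import rpy2.robjects as robjects

        robjects.r("load('%s')" % path)          # loads immgen into the R workspace
        e = robjects.r("exprs(immgen)")          # evaluated by R, returned as an R matrix

        nrow = robjects.r("nrow(exprs(immgen))")[0]
        ncol = robjects.r("ncol(exprs(immgen))")[0]

        # R stores matrices column-major; depending on the rpy2 version the shape
        # may need to be restored by hand after the flat conversion.
        expression_data = np.array(e).reshape((nrow, ncol), order='F')

        # Row/column names, if needed, can be pulled over separately, e.g.
        gene_names = list(robjects.r("rownames(exprs(immgen))"))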


  • wopen calls when porting to Linux

    - by laura
    I have an application which was developed under Windows, but for gcc. The code is mostly OS-independent, with very few classes which are Windows specific, because a Linux port was always regarded as necessary. The API, especially that which gets called as a direct result of user interaction, is using wide char arrays instead of char arrays (as a side note, I cannot change the API itself - at this point, std::wstring cannot be used). These are considered as encoded in UTF-16.

    In some places, the code opens files, mostly using the Windows-specific _wopen function call. The problem with this is there is no wopen-like substitute for Linux because Linux "only deals with bytes".

    The question is: how do I port this code? What if I wanted to open a file with the name "something™.log", how would I go about doing so in Linux? Is a cast to char* sufficient, would the wide chars be picked up automatically based on the locale (probably not)? Do I need to convert manually? I'm a bit confused regarding this, perhaps someone could point me to some documentation regarding the matter.
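    For illustration, one portable approach is to transcode the UTF-16 name to UTF-8 before handing it to the ordinary byte-oriented calls (open/fopen); a minimal sketch using iconv, with error handling trimmed and the wide data assumed to be 2-byte little-endian UTF-16 units as the question describes:

        #include <iconv.h>
        #include <stdint.h>
        #include <cstdio>
        #include <string>

        // Convert a UTF-16LE file name to UTF-8 so it can be passed to Linux file APIs.
        static std::string utf16_to_utf8(const uint16_t* in, size_t in_units)
        {
            iconv_t cd = iconv_open("UTF-8", "UTF-16LE");
            std::string out(in_units * 4, '\0');                 // worst-case output size

            char*  src       = reinterpret_cast<char*>(const_cast<uint16_t*>(in));
            size_t src_bytes = in_units * sizeof(uint16_t);
            char*  dst       = &out[0];
            size_t dst_bytes = out.size();

            iconv(cd, &src, &src_bytes, &dst, &dst_bytes);       // real code must check for (size_t)-1
            iconv_close(cd);

            out.resize(out.size() - dst_bytes);                  // keep only the converted bytes
            return out;
        }

        // Usage sketch:
        //   FILE* f = fopen(utf16_to_utf8(name, length_in_units).c_str(), "r");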


  • Integrating Search Server 2008 Express with WSS 3.0

    - by Jason Kemp
    I'm setting up the environment for an intranet using WSS (Windows SharePoint Services) 3.0. The catch is getting the environment configured to work with MS Search Server 2008 Express. Here's the environment I'd like to set up:

        A: Web Server; Win Server 2003 SP2; WSS 3.0 SP2; IIS 6.0; .NET 3.5 SP1
        B: Search Server; Win Server 2003 SP2; WSS 3.0 SP2; IIS 6.0; .NET 3.5 SP1; Search Server 2008 Express
        C: Database Server; Win Server 2003 SP2; SQL Server 2000 SP3 - Admin db, Content db, Config db, Search db

    The question is whether 3 servers can be used like the above configuration, or if the Search Server (B) has to be combined with (A) since we're using the free Express version of the Search Server. The documentation from MS doesn't make it clear either way. I can attack this problem with trial and error but would rather not. The bigger question is: what is the best practice for a WSS / Search Server installation?


  • Setting Ringtone notification from SD card file

    - by sgarman
    My goal is to set the user's notification sound from a file that is stored onto the SD card from within the application. I am using this code:

        if (path != null) {
            File k = new File(path, "moment.mp3");
            ContentValues values = new ContentValues();
            values.put(MediaStore.MediaColumns.DATA, k.getAbsolutePath());
            values.put(MediaStore.MediaColumns.TITLE, "My Song title");
            values.put(MediaStore.MediaColumns.SIZE, 215454);
            values.put(MediaStore.MediaColumns.MIME_TYPE, "audio/mp3");
            values.put(MediaStore.Audio.Media.ARTIST, "Some Artist");
            values.put(MediaStore.Audio.Media.DURATION, 230);
            values.put(MediaStore.Audio.Media.IS_RINGTONE, false);
            values.put(MediaStore.Audio.Media.IS_NOTIFICATION, true);
            values.put(MediaStore.Audio.Media.IS_ALARM, false);
            values.put(MediaStore.Audio.Media.IS_MUSIC, false);
            values.put(MediaStore.MediaColumns.DISPLAY_NAME, "Some Name");

            // Insert it into the database
            Uri uri = MediaStore.Audio.Media.getContentUriForPath(k.getAbsolutePath());
            Uri newUri = MainActivity.this.getContentResolver().insert(uri, values);

            RingtoneManager.setActualDefaultRingtoneUri(
                MainActivity.this,
                RingtoneManager.TYPE_NOTIFICATION,
                newUri
            );
            // RingtoneManager.setActualDefaultRingtoneUri(this, RingtoneManager.TYPE_NOTIFICATION, newUri);

            Toast.makeText(this, "Notification Ringtone Set", Toast.LENGTH_SHORT).show();
        }

    When I run this on the device I keep getting the error:

        06-12 15:19:36.741: ERROR/Database(2847): Error inserting is_alarm=false is_ringtone=false artist_id=35 is_music=false album_id=-1 title=My Song title duration=230 is_notification=true title_key=%D%\%%P%H%F%8%%R%<%R%B%4% mime_type=audio/mp3 date_added=1276370376 _display_name=moment.mp3 _size=215454 _data=/mnt/sdcard/Android/data/_MY APP PATH_/files/moment.mp3
        06-12 15:19:36.741: ERROR/Database(2847): android.database.sqlite.SQLiteConstraintException: error code 19: constraint failed

    I have seen others using this technique and I can't find any documentation on which values actually need to be passed in to successfully add the file into the Android system so that it can be set as a notification.


  • Inactive area after device rotation

    - by Sébastien
    Hi all, I don't understand what's wrong in my very simple application with device rotation:

    - I built my view with Interface Builder. (See screen capture here.)
    - I specified <key>UIInterfaceOrientation</key><string>UIInterfaceOrientationLandscapeRight</string> in my Info.plist file.
    - I have a (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation { return YES; } in my root view controller.

    The area on the left (shown in red on the capture), around 20 pixels wide, stays inactive (nothing happens if I hit a button in this area). In fact the full screen is active only in portrait mode; in landscape right mode there is this 20-pixel-wide inactive area, in landscape left mode this inactive area is on the right, and in portrait upside down mode this area is on the bottom.

    I read lots of posts and documentation about UIView rotation, but I did not find anything to solve this problem (I tried to play with view.frame and view.bounds without any success). Anybody has an idea? Thanks a lot. Regards, Sébastien.


  • How to catch a carp-warning?

    - by sid_com
    I tried to catch a carp warning ( carp "$start is $end" if (warnings::enabled()); ) with eval, but it didn't work, so I looked in the eval documentation and I discovered that eval catches only syntax errors, run-time errors or executed die statements. How could I catch a carp warning?

        #!/usr/bin/env perl
        use warnings;
        use strict;
        use 5.012;
        use List::Util qw(max min);
        use Number::Range;

        my @array;
        my $max = 20;
        print "Input (max $max): ";
        my $in = <>;
        $in =~ s/\s+//g;
        $in =~ s/(?<=\d)-/../g;

        eval {
            my $range = new Number::Range( $in );
            @array = sort { $a <=> $b } $range->range;
        };
        if ( $@ =~ /\d+ is > \d+/ ) { die $@ };    # catch the carp-warning - doesn't work

        die "Input greater than $max not allowed $!" if defined $max and max( @array ) > $max;
        die "Input '0' or less not allowed $!" if min( @array ) < 1;
        say "@array";
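    For illustration, the usual way to make a warning (carp included) catchable by eval is to install a local __WARN__ handler inside the eval and promote the warning to a die; a small sketch with a placeholder sub standing in for the Number::Range call:

        #!/usr/bin/env perl
        use strict;
        use warnings;

        sub do_something_that_may_carp {
            warn "10 is > 5\n";              # stand-in for the module's carp
        }

        my $ok = eval {
            # Any warning emitted in this scope now dies, so it lands in $@
            # just like an ordinary exception.
            local $SIG{__WARN__} = sub { die $_[0] };
            do_something_that_may_carp();
            1;
        };
        if ( !$ok && $@ =~ /\d+ is > \d+/ ) {
            die "caught the carp message: $@";
        }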


  • How can I exclude pages created from a specific template from the CQ5 dispatcher cache?

    - by Shawn
    I have a specific Adobe CQ5 (5.5) content template that authors will use to create pages. I want to exclude any page that is created from this template from the dispatcher cache. As I understand it currently, the only way I know to prevent caching is to configure dispatcher.any to not cache a particular URL. But in this case, the URL isn't known until a web author uses the template to create a page. I don't want to have to go back and modify dispatcher.any every time a page is created - or at least I want to automate this if there is no other way. I am using IIS for the dispatcher.

    The reason I don't want to cache the pages is because the underlying JSPs that render the content for these pages produce dynamic content, and the pages don't use querystrings and won't carry authentication headers. The pages will be created in unpredictable directories, so I don't know the URL pattern ahead of time.

    How can I configure things so that any page that is created from a certain template will be automatically excluded from the dispatcher cache? It seems like CQ ought to have some mechanism to respect HTTP response/caching headers. If the HTTP response headers specify that the response shouldn't be cached, it seems like the dispatcher shouldn't cache it - regardless of what dispatcher.any says. This is the CQ5 documentation I have been referencing.


  • How do I exclude data from local table schema_migrations from being pushed to Heroku DB?

    - by Thierry Lam
    I was able to push my Ruby on Rails app with MySQL (local dev) to the Heroku server, along with migrating my model with the command heroku rake db:migrate. I have also read the documentation on Database Import/Export. Is that doc referring to pushing actual data from my local dev DB to whichever Heroku DB? Do I need to modify anything in the file database.yml to make it happen? I ran the following command:

        heroku db:push

    and I am getting the error:

        Sending data
        2 tables, 3 records
        !!! Caught Server Exception | ETA: --:--:--
        Taps Server Error: PGError ERROR: duplicate key value violates unique constraint "unique_schema_migrations"

    I have 2 tables, one I created for my app and the other schema_migrations. The total number of entries among the 2 tables is 3. I'm also printing the number of entries I have in the table I have created and it's showing 0. Any ideas what I might be missing or what I am doing wrong?

    EDIT: I figured out the above - Heroku's DB already has schema_migrations the moment I ran migrate.

    New question: Does anyone know how I can exclude data from a specific table from being pushed to the Heroku DB? The table to exclude in this case will be schema_migrations.

    Not so good solution: I googled around and someone else was having the same issue. He suggested renaming the schema_migrations table to zschema_migrations. In this way data from the other tables will be pushed properly until it fails on the last table. It's a pretty bad solution but will do for the time being. A better solution would be to use an existing Rails command which can reset a specific table from a database. I don't think Rake can do that.


  • MongoDB - proper use of collections?

    - by zmg
    In Mongo my understanding is that you can have databases and collections. I'm working on a social-type app that will have blogs and comments (among other things) and had previously been using MySQL and pretty heavy partitioning in an attempt to limit possible concurrency issues. With MySQL I've stuffed all my user data into a _user database with several tables to further partition the data (blogs, pages, etc).

    My immediate reaction with Mongo would be to create a 'users' database with one collection per user. In this way user 'zach' blog entries would go into the 'zach' collection, with associated comments and such becoming sub-objects in the same collection. Basically like dynamically creating one table per user in MySQL, but apparently without the complexity and limitations that might impose. Of course since I haven't really used Mongo before I'm having trouble gauging the (ahem..) quality of this idea and the potential problems it might cause down the road.

    I'd like user data to be treated a lot like a users directory in a *nix environment, where user-created/non-shared (mostly) data gets put into one place (currently with MySQL that would be the appname_users as mentioned above). Most of the users' data will be specific to the users' page(s). Some of the user data which is queried across all site users (searchable user profiles) is currently kept in a separate database/table, and I expect things like this could be put into an appname_system database and be broken up into collections and/or application specific databases (appname_profiles).

    Anyway, since the available documentation on this is currently a little thin and my experience is extremely limited, I thought I might find a little guidance from someone with a better working understanding of the system. On the plus side I'd really already been attempting to treat MySQL as a schema-less document-store, and doing this with Mongo seems much more intuitive/sane/rational, so I'm really looking forward to getting started. Thanks, Zach


  • Rails 3 get raw post data and write it to tmp file

    - by Andrew
    I'm working on implementing Ajax-Upload for uploading photos in my Rails 3 app. The documentation says:

        For IE6-8, Opera, older versions of other browsers you get the file as you normally do with regular form-base uploads.
        For browsers which upload file with progress bar, you will need to get the raw post data and write it to the file.

    So, how can I receive the raw post data in my controller and write it to a tmp file so my controller can then process it? (In my case the controller is doing some image manipulation and saving to S3.)

    Some additional info: As I'm configured right now, the post is passing these parameters:

        Parameters: {"authenticity_token"=>"...", "qqfile"=>"IMG_0064.jpg"}

    ... and the CREATE action looks like this:

        def create
          @attachment = Attachment.new
          @attachment.user = current_user
          @attachment.file = params[:qqfile]

          if @attachment.save!
            respond_to do |format|
              format.js { render :text => '{"success":true}' }
            end
          end
        end

    ... but I get this error:

        ActiveRecord::RecordInvalid (Validation failed: File file name must be set.):
          app/controllers/attachments_controller.rb:7:in `create'
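    For illustration, a common controller-side pattern for this kind of uploader is to branch on whether qqfile arrived as a real multipart file or just a filename, and in the latter case spool the raw request body to a Tempfile; a sketch only (the helper name is made up, and the validation/S3 handling are assumed to stay as they are):

        # app/controllers/attachments_controller.rb (sketch)
        def create
          @attachment = Attachment.new
          @attachment.user = current_user
          @attachment.file = uploaded_file
          # ... save / S3 handling as before ...
        end

        private

        # Older browsers send a normal multipart upload; the XHR path sends the bytes
        # as the raw request body with only the filename in params[:qqfile].
        def uploaded_file
          if params[:qqfile].respond_to?(:read)
            params[:qqfile]
          else
            tmp = Tempfile.new(params[:qqfile])   # named after the original file
            tmp.binmode
            tmp.write(request.raw_post)           # or request.body.read
            tmp.rewind
            tmp
          end
        end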


  • How do you map a DateTime property to 2 varchar columns in the database with NHibernate (Fluent)?

    - by gabe
    I'm dealing with a legacy database that has date and time fields as char(8) columns (formatted yyyyMMdd and HH:mm:ss, respectively) in some of the tables. How can I map the 2 char columns to a single .NET DateTime property? I have tried the following, but I get a "can't access setter" error, of course, because the DateTime Date and TimeOfDay properties are read-only:

        public class SweetPocoMannaFromHeaven
        {
            public virtual DateTime? FileCreationDateTime { get; set; }
        }

        mapping.Component<DateTime?>(x => x.FileCreationDateTime, dt =>
        {
            dt.Map(x => x.Value.Date, "file_creation_date");
            dt.Map(x => x.Value.TimeOfDay, "file_creation_time");
        });

    I have also tried defining an IUserType for DateTime, but I can't figure it out. I've done a ton of googling for an answer, but I can't figure it out still. What is my best option to handle this stupid legacy database convention? A code example would be helpful since there's not much out there for documentation on some of these more obscure scenarios.
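    For illustration, one workaround that avoids the component mapping (and IUserType) altogether is to map the two char(8) columns as plain string properties and expose the DateTime as an unmapped convenience property; a sketch, with column and property names assumed and the parse formats taken from the question:

        using System;
        using System.Globalization;

        public class SweetPocoMannaFromHeaven
        {
            // Mapped straight to the legacy char(8) columns.
            public virtual string FileCreationDate { get; set; }   // "yyyyMMdd"
            public virtual string FileCreationTime { get; set; }   // "HH:mm:ss"

            // Not mapped; composed from the two mapped columns on demand.
            public virtual DateTime? FileCreationDateTime
            {
                get
                {
                    if (string.IsNullOrEmpty(FileCreationDate)) return null;
                    return DateTime.ParseExact(
                        FileCreationDate + " " + (FileCreationTime ?? "00:00:00"),
                        "yyyyMMdd HH:mm:ss",
                        CultureInfo.InvariantCulture);
                }
                set
                {
                    FileCreationDate = value.HasValue ? value.Value.ToString("yyyyMMdd") : null;
                    FileCreationTime = value.HasValue ? value.Value.ToString("HH:mm:ss") : null;
                }
            }
        }

    The Fluent mapping would then presumably map only the two string properties (e.g. Map(x => x.FileCreationDate, "file_creation_date")) and ignore the composite property.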


  • Automatically grow document view of NSScrollView using auto layout?

    - by Monolo
    Is there a simple way to get an NSScrollView to adapt to its document view changing size when using autolayout (the Lion feature)? I have tried calling both setNeedsUpdateConstraints: and setNeedsLayout: on the document view, the clip view and the scroll view, without any results. fittingSize of the document view reports the correct size. An NSPopover in conjunction with an NSViewController handles this nicely, with the popover growing and shrinking as needed, and I was hoping to get a similarly simple and robust behaviour with the scroll view. I have checked the documentation for scroll views, but it doesn't seem to be updated to cover autolayout.

    Edited to clarify: The problem I experience is that the document view, which holds subviews, is not re-sized when the subviews change their size, even if they call invalidateIntrinsicContentSize. The contents of the document view are hence clipped to the original size of the document view as they grow. The document view is created in a nib and set as the scroll view's document view in an awakeFromNib method.

    What I hoped to obtain was that the document view frame would automatically be adjusted when its fittingSize changes, and the scrollbars updated accordingly. NSPopover does something similar - provided that the subviews of the content controller's view have their constraints set right and various content hugging values are high enough (higher than the hidden popover window's height constraint priority, for one).


  • C#: How to run NUnit from my code

    - by Flavio
    Hello, I'd like to use NUnit to run unit tests in my plug-in, but it needs to be run in the context of my application. To solve this, I was trying to develop a plug-in that runs NUnit, which in turn will execute my tests in the application's context. I didn't find specific documentation on this subject, so I dug up a piece of information here and there and came up with the following piece of code (which is similar to one I found here on StackOverflow):

        SimpleTestRunner runner = new SimpleTestRunner();
        TestPackage package = new TestPackage( "Test" );
        string loc = Assembly.GetExecutingAssembly().Location;
        package.Assemblies.Add( loc );
        if( runner.Load(package) )
        {
            TestResult result = runner.Run( new NullListener() );
        }

    The result variable says "has no TestFixture", although I know for sure it is there. In fact my test file contains two tests. Using another approach I found, which is summarized by the following code:

        TestSuiteBuilder builder = new TestSuiteBuilder();
        TestSuite testSuite = builder.Build( package );

        // Run tests
        TestResult result = testSuite.Run( new NullListener(), NUnit.Core.TestFilter.Empty );

    I saw NUnit data structures with only 1 test and I had the same error. For the sake of completeness, I am using the latest version of NUnit, which is 2.5.5.10112. Does anyone know what I'm missing? A sample code would be appreciated. Thanks


  • What are the benefits of the different PHP compression libraries?

    - by Christopher W. Allen-Poole
    I've been looking into ways to compress PHP libraries, and I've found several libraries which might be useful, but I really don't know much about them. I've specifically been reading about bcompiler and PHAR libraries. Is there any performance benefit in either of these? Are there any "gotchas" I need to watch out for? What are the relative benefits? Do either of them add to/detract from performance? I'm also interested in learning of other libs which might be out there which are not obvious in the documentation.

    As an aside, does anyone happen to know whether these work more like zip files which just happen to have the code in there, or if they operate more like Python's pre-compiling which actually runs a pseudo-compiler?

    ======================= EDIT =======================

    I've been asked, "What are you trying to accomplish?" Well, I suppose the answer is that this is all hypothetical. It is a combination of these:

    - What if my pet project becomes the most popular web project on earth and I want to distribute it quickly and easily? (hey, a man can dream, right?)
    - It also seems that if using PHAR can be done easily, it would be the best way to create a subversion snapshot.
    - Python has this really cool pre-compiling policy; I wonder if PHP has something like that? These libraries seem to do something similar. Will they do that?
    - Hey, these libraries seem pretty neat, but I'd like clarification on the differences as they seem to do the same thing.
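    For illustration, the core Phar workflow is only a few calls (a sketch; the archive name, source directory and stub file are assumptions, and building an archive requires phar.readonly=0 in php.ini):

        <?php
        // build.php - package a source tree into a single runnable archive (sketch).
        $phar = new Phar('myapp.phar');
        $phar->buildFromDirectory(__DIR__ . '/src');              // add every file under src/
        $phar->setStub(Phar::createDefaultStub('index.php'));     // entry point inside the archive
        $phar->compressFiles(Phar::GZ);                           // optional per-file gzip compression

        // Afterwards the archive runs like an ordinary script:
        //   php myapp.phar
        // and files inside it can be required via the phar:// stream wrapper, e.g.
        //   require 'phar://myapp.phar/lib/bootstrap.php';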

