Search Results

Search found 21098 results on 844 pages for 'model import'.


  • PGP Command Line Decryption --- How to decrypt file?

    - by whitman6732
    I was sent a public key in order to decrypt a PGP-encrypted file. I imported the key with:

        gpg --import publickey.asc

    and verified it with:

        gpg --list-keys

    Now I'm trying to decrypt the file. I put the passphrase in a file called pass.txt and ran this at the command line:

        gpg --passphrase-fd ../../pass.txt --decrypt encryptedfile.txt.pgp --output encryptedfile.txt

    All it says is:

        Reading passphrase from file descriptor 0 ...

    and it doesn't seem to be doing anything else. I can't tell whether it's hanging. Is it a relatively quick process? The file is large (about 2 GB). Is the syntax correct?
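
    One likely culprit, judging only from the output above: --passphrase-fd expects a file descriptor number (0 for stdin), not a path, so gpg may simply be waiting on stdin. A minimal sketch of two alternatives, reusing the filenames from the question:

        # feed the passphrase file on stdin (descriptor 0)
        gpg --batch --passphrase-fd 0 --output encryptedfile.txt --decrypt encryptedfile.txt.pgp < pass.txt

        # or, if your GnuPG version supports it, let gpg read the file directly
        gpg --batch --passphrase-file pass.txt --output encryptedfile.txt --decrypt encryptedfile.txt.pgp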

    Read the article

  • Using Rails ActiveRecord relationships

    - by Brian Goff
    I'm a newbie programmer; I've been doing shell scripting for years but have recently taken on OOP programming using Ruby and am creating a Rails application. I'm having a hard time getting my head wrapped around how to use my defined model relationships. I've tried searching Google, but all I can come up with are basically cheat sheets for what has_many, belongs_to, etc. all mean. That stuff is easy to define and understand, especially since I've done a lot of work directly with SQL. What I don't understand is how to actually use those defined relationships. In my case I have three models: Locations, Hosts, Services. The relationships (not actual code, just shortened):

        Services  belongs_to :hosts
        Hosts     has_many :services, belongs_to :locations
        Locations has_many :hosts

    In this case I want to be able to display a column from Locations while working with Services. In SQL this is a simple join, but I want to do it the Rails/Ruby way, and also not use SQL in my code or redefine my joins.
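
    A minimal sketch of the usual approach, assuming Rails 2.x-era finders and the conventional singular association names (Service belongs_to :host, Host belongs_to :location) plus a name column on Location, none of which appear verbatim in the question:

        # walk the associations from a service up to its location
        service = Service.find(params[:id])
        service.host.location.name   # any Locations column, no hand-written SQL

        # eager-load to avoid extra queries when listing many services
        services = Service.all(:include => { :host => :location })
        services.each { |s| puts s.host.location.name }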

    Read the article

  • Problem fetching contacts from Yahoo! Address Book using PHP's CURL.

    - by Ravi
    Hi, I had been fetching a user's Yahoo address book with PHP's cURL when the user gave their login name and password. It was working fine; the address book came back in CSV format. But now things have suddenly stopped working: I just get some of Yahoo's HTML instead of CSV. I am guessing that Yahoo has somehow restricted fetching the address book with cURL. As an experiment I imported contacts manually from the Yahoo service, and before importing, Yahoo showed a CAPTCHA to verify. I guess this CAPTCHA mechanism was added recently. Is the CAPTCHA what prevents getting the address book when I am using PHP's cURL? I do not actually want to get the address book using Yahoo OAuth or BBAuth. Does anyone have an idea?

    Read the article

  • Why can't I download a whole image file with urllib2.urlopen()

    - by John Gann
    When I run the following code, it only seems to download the first little bit of the file and then exit. Occasionally I will get a 10054 error, but usually it just exits without getting the whole file. My internet connection is crappy wireless, and I often get broken downloads on larger files in Firefox, but my browser has no problem getting a 200k image file. I'm new to Python, and programming in general, so I'm wondering what nuance I'm missing.

        import urllib2
        xkcdpic = urllib2.urlopen("http://imgs.xkcd.com/comics/literally.png")
        xkcdpicfile = open("C:\\Documents and Settings\\John Gann\\Desktop\\xkcd.png", "w")
        while 1:
            chunk = xkcdpic.read(4028)
            if chunk:
                print chunk
                xkcdpicfile.write(chunk)
            else:
                break
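
    One common cause on Windows, offered here as a guess rather than a confirmed diagnosis: the destination file is opened in text mode ("w"), which translates bytes and can corrupt binary data such as PNGs. A sketch using binary mode and a streaming copy (the output path is shortened for the example):

        import shutil
        import urllib2

        response = urllib2.urlopen("http://imgs.xkcd.com/comics/literally.png")
        # "wb" keeps the bytes untouched; shutil streams the body in chunks
        with open("xkcd.png", "wb") as out:
            shutil.copyfileobj(response, out)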

    Read the article

  • Rails routes creating additional info in URL

    - by Danny McClelland
    Hi everyone, say I have a model called 'deliver' and I am using the default URL routes:

        # Install the default routes as the lowest priority.
        map.connect ':controller/:action/:id'
        map.connect ':controller/:action/:id.:format'

    So the deliver URL would be: http://localhost:3000/deliver/123 What I am trying to work out is how to use another field from the database alongside, or instead of, the ID. For example, if I have a field in the create view called 'deliveraddress', how do I put that into the routes, so I can have something like this: http://localhost:3000/deliver/deliveraddress Thanks, Danny
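
    One common approach is overriding to_param on the model, so generated URLs (and params[:id]) carry the address instead of the numeric ID. A minimal sketch, assuming a Deliver model with a deliveraddress column (both names taken from the question, the rest is an assumption):

        class Deliver < ActiveRecord::Base
          # URLs become /deliver/<deliveraddress> instead of /deliver/123
          def to_param
            deliveraddress.to_s.parameterize
          end
        end

        # the controller then looks the record up by that field
        @deliver = Deliver.find_by_deliveraddress(params[:id])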

    Read the article

  • Twitter OAuth Strategy with Warden + Devise Authentication Gems for Ruby

    - by Michael Waxman
    Devise, the authentication gem for Ruby based on Warden (another auth gem), does not support Twitter OAuth as an authentication strategy, but Warden does. There is supposed to be a way to use the Warden Twitter OAuth strategy within Devise, but I cannot figure it out. I'm using the following block in the Devise config file:

        config.warden do |manager|
          manager.oauth(:twitter) do |twitter|
            twitter.consumer_secret = <SECRET>
            twitter.consumer_key = <KEY>
            twitter.options :site => 'http://twitter.com'
          end
          manager.default_strategies.unshift :twitter_oauth
        end

    But I keep getting all sorts of error messages. Does anyone know how to make this work? I'm assuming there is more to do here (configuring a new link/route to talk to Warden, maybe adding attributes to the Devise User model, etc.), but I can't figure out what they are. Please help.

    Read the article

  • ASP.NET MVC 2: post AJAX form from JavaScript

    - by ANDyW
    Hi, I have a control with a strongly typed view and Ajax.BeginForm(). Now I would like to change the submit method from

        <input type="submit" id="testClick" value="Submit" />

    to some JavaScript method DoSubmit(). What I tried: invoking click on that submit button; invoking submit on the form with $('#form1').submit() and document.forms['form1'].submit(); the jQuery Form plugin with $('#form1').ajaxSubmit(); and a jQuery AJAX call:

        $.ajax({
            type: "POST",
            url: $("#form1").attr("action"),
            data: $("#form1").serialize(),
            success: function() { alert("epic win!!!1!1!") },
            error: function(XMLHttpRequest, textStatus, errorThrown) { alert("epic fail!") }
        });

    All those methods either created a normal (non-AJAX) request or didn't work at all. So, does anyone know how I can do an AJAX "form" submit from JavaScript so that the strongly typed mechanism (public ActionResult MyFormAction(FormModel model);) still works?

    Read the article

  • General database modeling and Django-specific modeling

    - by Shreko
    I'm wondering what the best way is to model something like the following. Let's say my company sells metal bars (parameters/fields: length, profile_type, quantity, etc.) of different profiles, where a profile may be a pipe (pipe_diameter, wall_thickness) or a hollow_rectangle (base, height, wall_thickness), or maybe some other profile with different parameters. Let's say the maximum number of profiles would be 12, each profile having between 2 and 5 parameters. Should everything be in a single table, like table_bars (id, length, quantity, profile_type, pipe_diameter, wall_thickness, base, height, etc.), where profile_type would be (pipe, rectangle, etc.)? Or should every shape have its own table with its own parameters, with table_bars keeping only id, length, quantity, profile_type and profile_id? And are there any Django-specific issues if multiple tables are the best answer? Thanks
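
    For the multiple-table variant, here is a minimal Django sketch using multi-table inheritance; the model names and field types are made up for illustration, with each profile shape getting its own table and bars pointing at a generic profile row:

        from django.db import models

        class Profile(models.Model):
            profile_type = models.CharField(max_length=30)

        class PipeProfile(Profile):
            pipe_diameter = models.DecimalField(max_digits=8, decimal_places=2)
            wall_thickness = models.DecimalField(max_digits=8, decimal_places=2)

        class RectangleProfile(Profile):
            base = models.DecimalField(max_digits=8, decimal_places=2)
            height = models.DecimalField(max_digits=8, decimal_places=2)
            wall_thickness = models.DecimalField(max_digits=8, decimal_places=2)

        class Bar(models.Model):
            length = models.DecimalField(max_digits=8, decimal_places=2)
            quantity = models.PositiveIntegerField()
            profile = models.ForeignKey(Profile)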

    Read the article

  • will_paginate undefined method error - Ruby on Rails

    - by bgadoci
    I just installed the gem for will_paginate and it says it was installed successfully. I followed all the instructions listed with the plugin and I am getting an "undefined method `paginate' for" error. I can't find much via a Google search and haven't been able to fix it myself (obviously). Here is the code:

        # PostsController
        def index
          @tag_counts = Tag.count(:group => :tag_name, :order => 'updated_at DESC', :limit => 10)
          @posts = Post.paginate :page => params[:page], :per_page => 50
          respond_to do |format|
            format.html # index.html.erb
            format.xml  { render :xml => @posts }
            format.json { render :json => @posts }
            format.atom
          end
        end

        # /model/post.rb
        class Post < ActiveRecord::Base
          validates_presence_of :body, :title
          has_many :comments, :dependent => :destroy
          has_many :tags, :dependent => :destroy
          cattr_reader :per_page
          @@per_page = 10
        end

        # /posts/views/index.html.erb
        <%= will_paginate @posts %>
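
    If the gem is installed but never loaded by the app, paginate is never mixed into the models, which would produce exactly this error. A sketch of the usual Rails 2.x fix (the version constraint is an assumption):

        # config/environment.rb
        Rails::Initializer.run do |config|
          config.gem 'will_paginate', :version => '~> 2.3'
        end

        # then, from the application root:
        # rake gems:install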

    Read the article

  • OpenCV 2.0 and Python

    - by Jive Dadson
    I cannot get the example Python programs to run. When executing the Python command "from opencv import cv" I get the message "ImportError: No module named _cv". There is a stale _cv.pyd in the site-packages directory, but no _cv.py anywhere. See step 5 below. MS Windows XP, VC++ 2008, Python 2.6, OpenCV 2.0. Here's what I have done:

    1. Downloaded and ran the MS Windows installer for OpenCV 2.0.
    2. Downloaded and installed CMake.
    3. Downloaded and installed SWIG.
    4. Ran CMake. After unchecking "ENABLE_OPENMP" in the CMake GUI, I was able to build OpenCV using INSTALL.vcproj and BUILD_ALL.vcproj. I do not know what the difference is, so I built everything under both of those project files. The C example programs run fine.
    5. Copied the contents of OpenCV2.0/Python2.6/lib/site-packages to my installed Python2.6/lib/site-packages directory. I notice that it contains an old _cv.pyd and an old libcv.dll.a.

    Read the article

  • Using Struts with SpringSource dm server gives Http Status 503 error

    - by komal
    Hi, I developed an enterprise application using Spring, Struts and Hibernate, and now I want to make it work with the OSGi dm Server. I found the book "Pro SpringSource dm Server", where the author explains a way to migrate a WAR to OSGi bundles, and I successfully migrated the application given in the book. The first step of the migration says to remove the lib folder in the WEB-INF directory and import all the relevant packages, which I did. The application deploys successfully on the dm Server, but when I try to connect to the URL it gives me this error:

        SpringSource dm Server - Error report
        HTTP Status 503 - Servlet action is currently unavailable
        type: Status report
        message: Servlet action is currently unavailable
        description: The requested service (Servlet action is currently unavailable) is not currently available.

    What could be the reason for this? I am out of clues on how to solve this issue. Please pass along any help you may have. Thanks in advance.

    Read the article

  • In JavaScript event handling, why do "return false" or "event.preventDefault()" and "stopping the event flow" make a difference?

    - by Jian Lin
    It is said that when we handle a "click" event, returning false or calling event.preventDefault() makes a difference. The difference is that preventDefault only prevents the default event action from occurring, i.e. a page redirect on a link click, a form submission, etc., while return false also stops the event flow. Does that mean that if the click event is registered several times for several actions using $('#clickme').click(function() { ... }), returning false will stop the other handlers from running? I am on a Mac now and so can only use Firefox and Chrome but not IE, which has a different event model. I tested it on FF and Chrome and all 3 handlers ran without any stopping... so what is the real difference, or is there a situation where "stopping the event flow" is not desirable? This is related to http://stackoverflow.com/questions/3042036/using-jquerys-animate-if-the-clicked-on-element-is-a-href-a and http://stackoverflow.com/questions/2017755/whats-the-difference-between-e-preventdefault-and-return-false
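
    One point that may explain the observation above: in jQuery, returning false from a handler is equivalent to calling both preventDefault() and stopPropagation(); it stops bubbling to ancestor elements but does not stop other handlers bound to the same element (that would require stopImmediatePropagation()), which matches all three handlers still running. A sketch, assuming a jQuery handler on #clickme:

        $('#clickme').click(function (e) {
            e.preventDefault();              // cancel the default action only (e.g. following a link)
            // e.stopPropagation();          // also stop bubbling to ancestor handlers
            // e.stopImmediatePropagation(); // also stop later handlers on this same element
            // return false;                 // jQuery shorthand: preventDefault() + stopPropagation()
        });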

    Read the article

  • How to sync iPhone and Mac CoreData objects through Bonjour?

    - by monotreme
    I know similar questions have been asked before. I'm using the Sync Demo app I found online here, which uses Picture Sharing as a guide. I've integrated it into my desktop and iPhone apps and have the connection working, but I'm clueless as to how to actually sync my objects. Is it as simple as:

        if ([iphone Object] != [desktop object]) {
            // merge the two
        }

    I use the exact same object model on both sides; I basically just want to know how to check whether there are differences and copy over the ones that differ. Does anyone know of any sample code anywhere that shows this? Thanks so much.

    Read the article

  • Deleting duplicated users (MySQL)

    - by vizzdoom
    Hi, I have two tables containing user information from two sites: p_users and p_users2. There are 3726 users in the first and 13717 in the second, and some users in p_users2 are also in p_users. I want to merge these two tables into one big table, but rows with the same usernames must not be duplicated. How can I do this? I tried something like this:

        DELETE FROM p_users2
        WHERE user_id IN (
          SELECT p.user_id
          FROM p_users p
          JOIN p_users2 p2 ON p.username = p2.username
        )

    After that I should be left with a table of unique usernames, which I want to export and import into the first one. But when I execute my query I get this error: SQL Error (1093): You can't specify target table 'p_users2' for update in FROM clause. (MySQL)
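
    MySQL raises error 1093 when the table being deleted from also appears in the subquery's FROM clause; a common workaround is to wrap the subquery in a derived table so MySQL materializes it first. A sketch along those lines (note it selects p2.user_id, i.e. the id from the table being cleaned, which is presumably the intent):

        DELETE FROM p_users2
        WHERE user_id IN (
          SELECT user_id FROM (
            SELECT p2.user_id
            FROM p_users p
            JOIN p_users2 p2 ON p.username = p2.username
          ) AS dupes
        );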

    Read the article

  • MongoMapper, Rails, Increment Works in Console but not Controller

    - by Michael Waxman
    I'm using mongo_mapper 0.7.5 and rails 2.3.8, and I have an instance method that works in my console but not in the controller of my actual app. I have no idea why this is.

        # controller
        if @friendship.save
          user1.add_friend(user2)
          ...

        # model
        ...
        key :friends_count, Integer, :default => 0
        key :followers_count, Integer, :default => 0

        def add_friend(user)
          ...
          self.increment(:friends_count => 1)
          user.increment(:followers_count => 1)
          true
        end

    This works in the console, but in the controller it does not change the follower count, only the friends count. What gives? The only thing I can even think of is that the way I'm passing the user in is the problem, but I'm not sure how to fix it.

    Read the article

  • Does anyone really understand how HFSC scheduling in Linux/BSD works?

    - by Mecki
    I read the original SIGCOMM '97 PostScript paper about HFSC. It is very technical, but I understand the basic concept. Instead of giving a linear service curve (as with pretty much every other scheduling algorithm), you can specify a convex or concave service curve, and thus it is possible to decouple bandwidth and delay. However, even though this paper mentions two kinds of scheduling algorithms being used (real-time and link-share), it always mentions only ONE curve per scheduling class (the decoupling is done by specifying this curve; only one curve is needed for that).

    Now HFSC has been implemented for BSD (OpenBSD, FreeBSD, etc.) using the ALTQ scheduling framework, and it has been implemented on Linux using the TC scheduling framework (part of iproute2). Both implementations added two additional service curves that were NOT in the original paper: a real-time service curve and an upper-limit service curve. Again, please note that the original paper mentions two scheduling algorithms (real-time and link-share), but in that paper both work with one single service curve. There never were two independent service curves for either one, as you currently find in BSD and Linux. Even worse, some versions of ALTQ seem to add an additional queue priority to HFSC (there is no such thing as priority in the original paper either). I found several BSD HowTos mentioning this priority setting (even though the man page of the latest ALTQ release knows no such parameter for HFSC, so officially it does not even exist). This all makes HFSC scheduling even more complex than the algorithm described in the original paper, and there are tons of tutorials on the Internet that often contradict each other, one claiming the opposite of the other. This is probably the main reason why nobody really seems to understand how HFSC scheduling really works. Before I can ask my questions, we need a sample setup of some kind. I'll use a very simple one, as seen in the image below:

    Here are some questions I cannot answer because the tutorials contradict each other:

    1. What do I need a real-time curve for at all? Assuming A1, A2, B1, B2 are all 128 kbit/s link-share (no real-time curve for either one), then each of those will get 128 kbit/s if the root has 512 kbit/s to distribute (and A and B are both 256 kbit/s of course), right? Why would I additionally give A1 and B1 a real-time curve with 128 kbit/s? What would this be good for? To give those two a higher priority? According to the original paper I can give them a higher priority by using a curve; that's what HFSC is all about after all. By giving both classes a curve of [256kbit/s 20ms 128kbit/s], both automatically have twice the priority of A2 and B2 (still only getting 128 kbit/s on average).

    2. Does the real-time bandwidth count towards the link-share bandwidth? E.g. if A1 and B1 both only have 64 kbit/s real-time and 64 kbit/s link-share bandwidth, does that mean once they are served 64 kbit/s via real-time, their link-share requirement is satisfied as well (they might get excess bandwidth, but let's ignore that for a second), or does that mean they get another 64 kbit/s via link-share? So does each class have a bandwidth "requirement" of real-time plus link-share? Or does a class only have a higher requirement than the real-time curve if the link-share curve is higher than the real-time curve (current link-share requirement equals specified link-share requirement minus real-time bandwidth already provided to this class)?

    3. Is the upper-limit curve applied to real-time as well, only to link-share, or maybe to both? Some tutorials say one way, some say the other. Some even claim upper-limit is the maximum for real-time bandwidth + link-share bandwidth. What is the truth?

    4. Assuming A2 and B2 are both 128 kbit/s, does it make any difference if A1 and B1 are 128 kbit/s link-share only, or 64 kbit/s real-time and 128 kbit/s link-share, and if so, what difference?

    5. If I use the separate real-time curve to increase priorities of classes, why would I need "curves" at all? Why is real-time not a flat value and link-share also a flat value? Why are both curves? The need for curves is clear in the original paper, because there is only one attribute of that kind per class. But now, having three attributes (real-time, link-share, and upper-limit), what do I still need curves on each one for? Why would I want the curves' shape (not average bandwidth, but their slopes) to be different for real-time and link-share traffic?

    6. According to the little documentation available, real-time curve values are totally ignored for inner classes (class A and B); they are only applied to leaf classes (A1, A2, B1, B2). If that is true, why does the ALTQ HFSC sample configuration (search for 3.3 Sample configuration) set real-time curves on inner classes and claim that those set the guaranteed rate of those inner classes? Isn't that completely pointless? (Note: pshare sets the link-share curve in ALTQ and grate the real-time curve; you can see this in the paragraph above the sample configuration.)

    7. Some tutorials say the sum of all real-time curves may not be higher than 80% of the line speed, others say it must not be higher than 70% of the line speed. Which one is right, or are they maybe both wrong?

    8. One tutorial said you should forget all the theory. No matter how things really work (schedulers and bandwidth distribution), imagine the three curves according to the following "simplified mind model": real-time is the guaranteed bandwidth that this class will always get. Link-share is the bandwidth that this class wants in order to become fully satisfied, but satisfaction cannot be guaranteed. In case there is excess bandwidth, the class might even get offered more bandwidth than necessary to become satisfied, but it may never use more than upper-limit says. For all this to work, the sum of all real-time bandwidths may not be above xx% of the line speed (see the question above; the percentage varies). Question: Is this more or less accurate, or a total misunderstanding of HFSC?

    9. And if the assumption above is really accurate, where is prioritization in that model? E.g. every class might have a real-time bandwidth (guaranteed), a link-share bandwidth (not guaranteed) and maybe an upper-limit, but still some classes have higher priority needs than other classes. In that case I must still prioritize somehow, even among the real-time traffic of those classes. Would I prioritize by the slope of the curves? And if so, which curve? The real-time curve? The link-share curve? The upper-limit curve? All of them? Would I give all of them the same slope or each a different one, and how do I find out the right slope?

    I still haven't lost hope that there exists at least a handful of people in this world who really understood HFSC and are able to answer all these questions accurately. And doing so without contradicting each other in the answers would be really nice ;-)
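
    For concreteness, here is a minimal sketch of how the sample hierarchy above is commonly expressed with Linux tc; the interface name (eth0), the class numbering, and the 20 ms two-segment curve on A1/B1 are assumptions taken from the numbers in question 1, and the rt/ls/ul keywords map to the real-time, link-share, and upper-limit curves discussed above:

        # root qdisc and the two inner classes A and B (256 kbit/s each under a 512 kbit/s root)
        tc qdisc add dev eth0 root handle 1: hfsc default 12
        tc class add dev eth0 parent 1:  classid 1:1  hfsc ls m2 512kbit ul m2 512kbit
        tc class add dev eth0 parent 1:1 classid 1:10 hfsc ls m2 256kbit
        tc class add dev eth0 parent 1:1 classid 1:20 hfsc ls m2 256kbit

        # leaves: A1/B1 with a two-segment real-time curve, A2/B2 link-share only
        tc class add dev eth0 parent 1:10 classid 1:11 hfsc rt m1 256kbit d 20ms m2 128kbit ls m2 128kbit
        tc class add dev eth0 parent 1:10 classid 1:12 hfsc ls m2 128kbit
        tc class add dev eth0 parent 1:20 classid 1:21 hfsc rt m1 256kbit d 20ms m2 128kbit ls m2 128kbit
        tc class add dev eth0 parent 1:20 classid 1:22 hfsc ls m2 128kbit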

    Read the article

  • How do you fix a TSD02016 error in a database project?

    - by Keith Sirmons
    Howdy, I have a database from which I am importing a schema using the Visual Studio 2010 Database Project tool vsdbcmd.exe:

        vsdbcmd /a:Import /dsp:Sql /model:"Database" /cs:"Server=SqlServer; Initial Catalog=DatabaseName; Integrated Security=SSPI;"

    The tool reports an error:

        Error TSD02016, Gen-259 (12,50) The column name is not valid. No table name was specified.

    How would I go about pinpointing where this error originates? I have found one resource on the internet (http://social.msdn.microsoft.com....) that points to the possibility of a keyword used incorrectly, but the error messages are not the same. What is Gen-259? Thank you, Keith

    Read the article

  • SyntaxError using gdata-python-client to access Google Book Search Data API

    - by isbadawi
        >>> import gdata.books.service
        >>> service = gdata.books.service.BookService()
        >>> results = service.search_by_keyword(isbn='0434003484')
        Traceback (most recent call last):
          File "<pyshell#4>", line 1, in <module>
            results = service.search_by_keyword(isbn='0434003484')
          ... snip ...
          File "C:\Python26\lib\site-packages\atom\__init__.py", line 127, in CreateClassFromXMLString
            tree = ElementTree.fromstring(xml_string)
          File "<string>", line 85, in XML
        SyntaxError: syntax error: line 1, column 0

    This is a minimal example -- in particular, the book service unit tests included in the package also fail with the exact same error. I've looked at the wiki and open issue tickets on Google Code to no avail (and this seems to me more apt to be a silly error on my end than a problem with the library). I'm not sure how to interpret the error message. If it matters, I'm using Python 2.6.5.

    Read the article

  • Bourne Script: Redirect success messages but NOT error messages

    - by sixtyfootersdude
    This command:

        keytool -import -file "$serverPath/$serverCer" -alias "$clientTrustedCerAlias" -keystore "$clientPath/$clientKeystore" -storepass "$serverPassword" -noprompt

    outputs the following when it runs successfully:

        Certificate was added to keystore

    I tried redirecting standard out with:

        keytool ... > /dev/null

    but it still prints. The message appears to be written to standard error, since it is not displayed when I do this:

        keytool ... > /dev/null 2>&1

    However, that is not what I want. I would like error messages to be output normally, but I do not want "success" messages printed to the command line. Any ideas? Whatever happened to the Unix convention "if it works, do not output anything"?
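
    One sketch of a workaround, assuming the success text is stable across keytool versions: merge stderr into stdout and filter out only the known success line, so real errors still get through.

        keytool -import -file "$serverPath/$serverCer" -alias "$clientTrustedCerAlias" -keystore "$clientPath/$clientKeystore" -storepass "$serverPassword" -noprompt 2>&1 | grep -v "Certificate was added to keystore"

    Note that the exit status of the pipeline is then grep's rather than keytool's, so check keytool's result another way if the script relies on its return code.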

    Read the article

  • Eclipse keyboard shortcuts: "alt+shift+" vs. "shift+" vs. "ctrl+alt+" etc. -- Is there an underlying logic?

    - by MatrixFrog
    There are a zillion questions on SO about keyboard shortcuts in Eclipse, but I've always wondered whether there is an underlying logic to the decisions about which shortcuts are ctrl+alt+[some letter], which are just ctrl+[some letter], and so on. Obviously there is a need to use a variety of combinations because there are only so many keys on the keyboard, but why, for example, is "add import" ctrl+shift+m, while "extract method" is alt+shift+m, instead of the other way around? I think if there is some underlying logic to these decisions, it will make it easier to remember more shortcuts without having to scan through the huge right-click menus to find them, and I won't accidentally use the wrong one as often.

    Read the article

  • Large file upload into WSS v3

    - by Rubens Farias
    I built a WSSv3 application which uploads files in small chunks; when each piece of data arrives, I temporarily keep it in a SQL 2005 image column, for performance reasons.** The problem comes when the upload ends: I need to move the data from SQL Server into a SharePoint document library through the WSSv3 object model. Right now, I can think of two approaches:

        SPFileCollection.Add(string, (byte[])reader[0]); // OutOfMemoryException

    and:

        SPFile file = folder.Files.Add("filename", new byte[]{ });
        using (Stream stream = file.OpenBinaryStream())
        {
            // ... init vars and stuff ...
            while ((bytes = reader.GetBytes(0, offset, buffer, 0, BUFFER_SIZE)) > 0)
            {
                stream.Write(buffer, 0, (int)bytes); // Timeout issues
            }
            file.SaveBinary(stream);
        }

    Is there any other way to complete this task successfully?

    ** Performance reasons: if you try to write every chunk directly to SharePoint, you'll notice a performance degradation as the file grows (100 MB and up).

    Read the article

  • Java - When to use Iterators?

    - by Walter White
    Hi all, I am trying to better understand when I should and should not use Iterators. To me, whenever I have a potentially large amount of data to iterate through, I write an Iterator for it. If the class also lends itself to the Iterator interface, then it seems like a win. I was reading a little bit about there being a lot of overhead with using an Iterator. A good example of where I used an Iterator was to iterate through a bunch of SQL scripts to execute one query at a time, reading each one in and then executing it. Is there another performance trade-off I should be aware of? Before I used iterators, I would read the entire String of SQL commands to execute into an ArrayList and then iterate through that. If the import is rather large (as for geolocation data), the server tends to get bogged down. Walter
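
    A minimal sketch of the streaming pattern described above; the class name is made up and it treats one line as one statement for simplicity, but it shows the point of an Iterator here: statements are read lazily instead of being loaded into an ArrayList first.

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.io.Reader;
        import java.util.Iterator;
        import java.util.NoSuchElementException;

        class SqlStatementIterator implements Iterator<String> {
            private final BufferedReader reader;
            private String next;

            SqlStatementIterator(Reader in) throws IOException {
                this.reader = new BufferedReader(in);
                this.next = reader.readLine();  // read ahead one statement
            }

            public boolean hasNext() {
                return next != null;
            }

            public String next() {
                if (next == null) {
                    throw new NoSuchElementException();
                }
                String current = next;
                try {
                    next = reader.readLine();   // fetch the following statement lazily
                } catch (IOException e) {
                    throw new RuntimeException(e);
                }
                return current;
            }

            public void remove() {
                throw new UnsupportedOperationException();
            }
        }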

    Read the article

  • Urllib's urlopen breaking on some sites (e.g. StackApps api)

    - by Edan Maor
    I'm using urllib2's urlopen function to try to get a JSON result from the StackOverflow API. The code I'm using:

        >>> import urllib2
        >>> conn = urllib2.urlopen("http://api.stackoverflow.com/0.8/users/")
        >>> conn.readline()

    The result I'm getting:

        '\x1f\x8b\x08\x00\x00\x00\x00\x00\x04\x00\xed\xbd\x07`\x1cI\x96%&/m\xca{\x7fJ\...

    I'm fairly new to urllib, but this doesn't seem like the result I should be getting. I've tried it in other places and I get what I expect (the same as visiting the address with a browser gives me: a JSON object). Using urlopen on other sites (e.g. "http://google.com") works fine and gives me actual HTML. I've also tried using urllib and it gives the same result. I'm pretty stuck, not even knowing where to look to solve this problem. Any ideas?
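
    One plausible explanation, not verified against this exact API: the leading '\x1f\x8b' bytes are the gzip magic number, i.e. the response body is gzip-compressed, which a browser decompresses transparently but urllib2 does not. A sketch of decompressing it:

        import gzip
        import io
        import urllib2

        conn = urllib2.urlopen("http://api.stackoverflow.com/0.8/users/")
        data = conn.read()
        if data[:2] == '\x1f\x8b':  # gzip magic number
            data = gzip.GzipFile(fileobj=io.BytesIO(data)).read()
        print data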

    Read the article

  • Why is the user.dir system property working in Java?

    - by Geo
    Almost every article I read told me that you can't chdir in Java, and the accepted answer to this question says you can't do it in Java. However, here's some of the stuff I tried:

        geo@codebox:~$ java -version
        java version "1.6.0_14"
        Java(TM) SE Runtime Environment (build 1.6.0_14-b08)
        Java HotSpot(TM) Client VM (build 14.0-b16, mixed mode, sharing)

    Here's a test class I'm using:

        import java.io.*;

        public class Ch {
            public static void main(String[] args) {
                System.out.println(new File(".").getAbsolutePath());
                System.setProperty("user.dir", "/media");
                System.out.println(new File(".").getAbsolutePath());
            }
        }

        geo@codebox:~$ pwd
        /home/geo
        geo@codebox:~$ java Ch
        /home/geo/.
        /media/.

    Please explain why this worked. Can I use this from now on and expect it to work the same way on all platforms?
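
    A hedged caution rather than a definitive answer: getAbsolutePath() resolves relative File paths against the user.dir property, but relative paths handed to actual I/O calls are generally resolved by the operating system against the real process working directory, which setProperty does not change, so the output above can be misleading. A sketch that would expose the difference (the file name is hypothetical):

        import java.io.*;

        public class ChCheck {
            public static void main(String[] args) throws IOException {
                System.setProperty("user.dir", "/media");
                // Reports a path under /media...
                System.out.println(new File("test.txt").getAbsolutePath());
                // ...but this still tries to open test.txt in the directory
                // the JVM was started from, not in /media.
                InputStream in = new FileInputStream("test.txt");
                in.close();
            }
        }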

    Read the article

  • One admin for multiple sites

    - by valya
    I have two sites with different SITE_IDs, but I want to have only one admin interface for both sites. I have a model, which is just an extended FlatPage:

        # models.py
        class SFlatPage(FlatPage):
            currobjects = CurrentSiteManager('sites')
            galleries = models.ManyToManyField(Gallery)
            # etc

        # admin.py
        class SFlatPageAdmin(FlatPageAdmin):
            fieldsets = None

        admin.site.register(SFlatPage, SFlatPageAdmin)
        admin.site.unregister(FlatPage)

    I don't know why, but only the pages for the current site show up in the admin interface. On http://site1.com/admin/ I see flatpages for site1, and on http://site2.com/admin/ I see flatpages for site2. But I want to see all pages in the http://site1.com/admin/ interface! What am I doing wrong?
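
    A guess at the cause, not a confirmed diagnosis: since currobjects (a CurrentSiteManager) is the first manager declared on SFlatPage, it may be acting as the model's default manager, and the admin change list uses the default manager, so each site's admin only sees its own pages. A sketch of two ways around that:

        # models.py -- declare a plain manager before the site-limited one so it
        # becomes the model's default manager (what the admin change list uses)
        from django.contrib.flatpages.models import FlatPage
        from django.contrib.sites.managers import CurrentSiteManager
        from django.db import models

        class SFlatPage(FlatPage):
            objects = models.Manager()
            currobjects = CurrentSiteManager('sites')

        # admin.py -- alternatively, force an unfiltered queryset in the admin
        from django.contrib.flatpages.admin import FlatPageAdmin

        class SFlatPageAdmin(FlatPageAdmin):
            fieldsets = None

            def queryset(self, request):
                return SFlatPage.objects.all()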

    Read the article
