Search Results

Search found 10189 results on 408 pages for 'db 11gr2'.

  • Optimising RSS parsing on App Engine to avoid high CPU warnings

    - by Danny Tuppeny
    I'm pulling some RSS feeds into a datastore in App Engine to serve up to an iPhone app. I use cron to schedule updating the RSS every x minutes. Each task parses only one RSS feed (which has 15-20 items). I frequently get warnings about high CPU usage in the App Engine dashboard, so I'm looking for ways to optimise my code.

    Currently I use minidom (since it's already there on App Engine), but I suspect it's not very efficient! Here's the code:

        dom = minidom.parseString(urlfetch.fetch(url).content)
        if dom:
            items = []
            for node in dom.getElementsByTagName('item'):
                item = RssItem(
                    key_name = self.getText(node.getElementsByTagName('guid')[0].childNodes),
                    title = self.getText(node.getElementsByTagName('title')[0].childNodes),
                    description = self.getText(node.getElementsByTagName('description')[0].childNodes),
                    modified = datetime.now(),
                    link = self.getText(node.getElementsByTagName('link')[0].childNodes),
                    categories = [self.getText(category.childNodes) for category in node.getElementsByTagName('category')]
                )
                items.append(item)
            db.put(items)

        def getText(self, nodelist):
            rc = ''
            for node in nodelist:
                if node.nodeType == node.TEXT_NODE:
                    rc = rc + node.data
            return rc

    There isn't much going on, but the scripts often take 2-6 seconds of CPU time, which seems a bit excessive for looping through 20-ish items and reading a few attributes.

    What can I do to make this faster? Is there anything particularly bad in the above code, or should I change to another way of parsing? Are there any libraries (that work on App Engine) that would be better, or would I be better off parsing the RSS myself?
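
    A minimal sketch of the same parse using ElementTree, which builds the tree far more cheaply than minidom's pure-Python DOM (the C accelerator is used where the runtime provides it). RssItem and the feed URL are the question's own; a well-formed RSS 2.0 feed is assumed:

        from datetime import datetime
        try:
            from xml.etree import cElementTree as ET  # C parser, if the runtime provides it
        except ImportError:
            from xml.etree import ElementTree as ET   # pure-Python fallback
        from google.appengine.api import urlfetch
        from google.appengine.ext import db

        def update_feed(url):
            # One pass over the fetched bytes; no intermediate DOM objects.
            root = ET.fromstring(urlfetch.fetch(url).content)  # the <rss> element
            items = []
            for node in root.findall('channel/item'):
                items.append(RssItem(  # RssItem model as defined in the question
                    key_name=node.findtext('guid'),
                    title=node.findtext('title'),
                    description=node.findtext('description'),
                    modified=datetime.now(),
                    link=node.findtext('link'),
                    categories=[c.text or '' for c in node.findall('category')]))
            db.put(items)  # single batched datastore write, as in the original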

  • [Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified - works in the IDE, fails as a runnable JAR

    - by Matt
    I am developing a Java app (with the ODBC bridge - forgive me, the only Paradox driver I have been able to obtain is the Microsoft ODBC driver) which works fine in Eclipse (and NetBeans), connecting to and obtaining data from an ancient Paradox 5.x database. As long as it is run from inside my IDE, it compiles and runs flawlessly. When I export it to a runnable jar, suddenly this error occurs:

        [Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified

    The jar is being run on the same box as my development IDE, so I am confused about the cause. It is run via console from a user account, as with the IDE. My connection string is:

        jdbc:odbc:Driver={Microsoft Paradox Driver (*.db )};DriverID=538; Fil=Paradox 5.X; DefaultDir=C:\paradox\database\location\

    It was obtained from connectionstrings.com and, as mentioned before, works fine when run from the IDE. It seems to 'magically' create its own connection, avoiding the setup of a DSN - I am unsure quite how, but it works. The only other thing I can think of that might be pertinent is that my PC runs a 64-bit OS (Windows Server 2008). Please help; any suggestions or comments will be greatly appreciated. Thanks, Matt
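
    A small sketch of the same DSN-less connection with the bridge driver registered explicitly; if Class.forName fails in the standalone jar, the driver really is missing from that JVM. Note that the JDBC-ODBC bridge and the Paradox ODBC driver are 32-bit only, so running the jar under a 64-bit JVM on a 64-bit OS is a common cause of exactly this error. The class name below is the standard Sun bridge driver; the paths are the question's own:

        import java.sql.Connection;
        import java.sql.DriverManager;

        public class ParadoxProbe {
            public static void main(String[] args) throws Exception {
                // Register the bridge explicitly instead of relying on auto-loading.
                Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
                String url = "jdbc:odbc:Driver={Microsoft Paradox Driver (*.db )};"
                           + "DriverID=538;Fil=Paradox 5.X;"
                           + "DefaultDir=C:\\paradox\\database\\location\\";
                Connection conn = DriverManager.getConnection(url);
                System.out.println("Connected: " + !conn.isClosed());
                conn.close();
            }
        }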

  • Fluent NHibernate auto mapping: map property from another table's column

    - by queen3
    I'm trying to use S#arp Architecture, which includes Fluent NHibernate - I'm a newbie with it (and with NHibernate too, frankly speaking). Auto mapping is used. So I have this:

        public class UserAction : Entity
        {
            public UserAction() { }

            [DomainSignature]
            [NotNull, NotEmpty]
            public virtual string Name { get; set; }

            [NotNull, NotEmpty]
            public virtual string TypeName { get; private set; }
        }

        public class UserActionMap : IAutoMappingOverride<UserAction>
        {
            public void Override(AutoMap<UserAction> mapping)
            {
                mapping.WithTable("ProgramComponents", m => m.Map(x => x.TypeName));
            }
        }

    Now, table UserActions references table ProgramComponents (many-to-one) and I want property UserAction.TypeName to take its value from the column ProgramComponents.TypeName. However, the above code fails with:

        NHibernate.MappingException: Duplicate property mapping of TypeName found in OrderEntry3.Core.UserAction

    As far as I understand, the problem is that TypeName is already auto-mapped, but I haven't found a way to remove the automatic mapping. Actually I think my WithTable/Map mapping should replace the automatic TypeName mapping, but it seems it does not. I also tried a different mapping (the names differ but it's all the same):

        mapping.WithTable("ProgramComponents", m => m.References<ProgramComponent>(
            x => x.Selector, "ProductSelectorID"));

    and still get the same error. I can work around this with:

        mapping.HasOne<ProgramComponent>(x => x.Selector);

    but that's not exactly what I want to do. And I still wonder why the first two methods don't work. I suspect this is because of WithTable.

  • Dynamically horizontally scalable key-value store

    - by Zubair
    Hi, is there a key-value store that will give me the following:

    - allows me to simply add and remove nodes and will redistribute the data automatically
    - allows me to remove nodes and still have 2 extra data nodes to provide redundancy
    - allows me to store text or images up to 1GB in size
    - can store small-size data up to 100TB of data
    - fast (so will allow queries to be performed on top of it)
    - makes all this transparent to the client
    - works on Ubuntu/FreeBSD or Mac
    - free or open source

    I basically want something I can use as a "single", and not have to worry about having memcached, a db, and several storage components - so yes, I do want a database "silver bullet", you could say. Thanks, Zubair

    Answers so far:

    - MogileFS on top of BackBlaze - as far as I can see this is just a filesystem, and after some research it only seems to be appropriate for large image files
    - Tokyo Tyrant - needs LightCloud. This doesn't auto-scale as you add new nodes. I did look into this and it seems very fast for queries which fit onto a single node, though
    - Riak - this is one I am looking into myself, but I don't have any results yet
    - Amazon S3 - is anyone using this as their sole persistence layer in production? From what I have seen it seems to be used for storage of images, as complex queries are too expensive
    - Cassandra (suggested by @shaman) - definitely one I am looking into

    So far it seems that there is no database or key-value store that fulfils the criteria I mentioned - not even after offering a bounty of 100 points did the question get answered!

  • CQRS - The query side

    - by mattcodes
    A lot of the blogosphere articles related to CQRS (Command Query Responsibility Segregation) seem to imply that all screens/viewmodels are flat, e.g. Name, Age, Location Of Birth etc., and thus suggest that implementation-wise we stick them into a fast read source - a single table per view, MySQL etc. - and pull them out with something primitive like SqlDataReader, kicking out that nasty NHibernate ORM.

    However, while I agree that domain models don't map well to most screens, many of the screens that I work with are more dimensional, and I'm sure this is pretty common in LOB apps. So my question is: how are people handling a screen where, for example, it displays a summary of customer details and then a list of their orders with a [more detail] link, etc.?

    I thought about keeping the straightforward SQL query to the query database but breaking off the outer join so I can build a suitable ViewModel for the View, but it seems like overkill?

    Alternatively (this is starting to feel yuck): in the CustomerSummaryView table, have a text/big (whatever the type is in your DB) column called Orders, where the columns for the order summary grid are separated by "," and the rows by "|". Even with an XML datatype it still feels dirty. Any thoughts on an optimal practice?
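
    One common way to keep the read side flat without packing child rows into a delimited column is to give the children their own read table and assemble the view model from two cheap keyed reads. A sketch - the type and table names here are illustrative, not from the question:

        using System;
        using System.Collections.Generic;

        // Hypothetical read-side DTOs for the customer-with-orders screen.
        public class CustomerSummaryView
        {
            public int CustomerId { get; set; }
            public string Name { get; set; }
            public List<OrderSummaryView> Orders { get; set; }
        }

        public class OrderSummaryView
        {
            public int OrderId { get; set; }
            public DateTime PlacedOn { get; set; }
            public decimal Total { get; set; }
        }

        // Two flat, indexed reads instead of one packed column:
        //   SELECT ... FROM CustomerSummaryView WHERE CustomerId = @id
        //   SELECT ... FROM OrderSummaryView    WHERE CustomerId = @id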

  • VS2008 adding SQL Server Database (SQL Server 2008 Mgmt Studio) not working

    - by Kahn
    I'm trying to practice using ASP.NET MVC at home, but I ran into an impossible problem. I cannot open a connection to SQL Server 2008; I get this error:

        "Connections to SQL Server files (*.mdf) require SQL Server Express 2005 to function properly. ..."

    I've googled around for numerous responses, none of them working or even addressing this issue. I'm running Vista 32-bit, my SQL Server 2008 Mgmt Studio is also 32-bit, and I have SP1 installed both on VS2008 Professional and on the SQL Server. I changed the machine.config connectionStrings from .\SQLExpress to my SQL Server 2008 instance name. If I connect manually through web.config, in an asp:datasource or in code-behind, everything works fine. But for some reason trying to add a DB connection directly like this always gets the error.

    This is pretty fatal, since I can't rightly do much unless I can use LINQ to SQL with my MVC test project, and this is the only way I know how. It worked fine at school and work, but not at home. Installing SQL Server Express 2005, as some have suggested, is not an option. Obviously it HAS to work with SQL Server 2008. Thanks in advance.
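
    A common workaround is to skip the *.mdf attach flow entirely: create the database on the 2008 instance, point a plain connection string at it, and add the LINQ to SQL classes against that connection. A hedged example; the server and database names are placeholders:

        <connectionStrings>
          <add name="MvcTestDb"
               connectionString="Data Source=MYBOX\MSSQL2008;Initial Catalog=MvcTest;Integrated Security=True"
               providerName="System.Data.SqlClient" />
        </connectionStrings>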

  • How to pass binary data between two apps using Content Provider?

    - by Viktor
    I need to pass some binary data between two Android apps using a Content Provider (sharedUserId is not an option). I would prefer not to pass the data (a savegame stored as a file, small in size, < 20k) as a file (i.e. by overriding openFile()), since this would necessitate some complicated temp-file scheme to cope with concurrency between several content provider accesses and a running game.

    I would like to read the file into memory under a mutex lock and then pass the binary array in the simplest way possible. How do I do this?

    It seems creating a file in memory is not a possibility due to the return type of openFile(). query() needs to return a Cursor. Using MatrixCursor is not possible, since it applies toString() to all stored objects when reading them. What do I need to do? Implement a custom Cursor? This class has 30 abstract methods. Do I read the file, put it in a SQLite db and return the cursor? The complexity of this seemingly simple task is mind-boggling.

  • Having trouble passing text from MySQL to a Javascript function using PHP

    - by Nathan Brady
    So here's the problem. I have data in a MySQL DB as text; the data is inserted via mysql_real_escape_string. I have no problem with the data being displayed to the user. At some point I want to pass this data into a JavaScript function called foo:

        // This is a PHP block of code
        // $someText is text retrieved from the database
        echo "<img src=someimage.gif onclick=\"foo('{$someText}')\">";

    If the data in $someText has line breaks in it, like:

        Line 1
        Line 2
        Line 3

    the JavaScript breaks, because the HTML output is:

        <img src=someimage.gif onclick="foo('line1
        line2
        line3')">

    So the question is: how can I pass $someText to my JavaScript foo function while preserving line breaks and carriage returns, but without breaking the code?

    Update: after using JSON like this:

        echo "<img src=someimage.gif onclick=\"foo($newData)\">";

    it outputs HTML like this:

        onclick="foo("line 1<br \/>\r\nline 2");">

    which displays the image followed by: \r\nline 2");"
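
    A sketch of the usual fix: json_encode() (PHP 5.2+) turns the text into a valid JavaScript string literal, escaping the newlines as \n, and htmlspecialchars() then protects the quotes inside the attribute value. It should be run on the raw text, not on an nl2br()'d copy, which is what the <br \/> fragments in the update suggest is happening:

        <?php
        // $someText comes straight from the database, line breaks intact.
        $js   = json_encode($someText);                     // e.g. "Line 1\nLine 2\nLine 3"
        $attr = htmlspecialchars("foo($js);", ENT_QUOTES);  // safe inside onclick="..."
        echo "<img src=\"someimage.gif\" onclick=\"$attr\">";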

  • Database not completely updated in rails migration

    - by Aatish Sai
    I am new to Ruby on Rails. I have a migration called CreateUsers:

        class CreateUsers < ActiveRecord::Migration
          def change
            create_table :users do |t|
              t.column :username,        :string,  :limit => 25, :default => "", :null => false
              t.column :hashed_password, :string,  :limit => 40, :default => "", :null => false
              t.column :first_name,      :string,  :limit => 25, :default => "", :null => false
              t.column :last_name,       :string,  :limit => 40, :default => "", :null => false
              t.column :email,           :string,  :limit => 50, :default => "", :null => false
              t.column :display_name,    :string,  :limit => 25, :default => "", :null => false
              t.column :user_level,      :integer, :limit => 3,  :default => 0,  :null => false
            end

            User.create(:username => 'test', :hashed_password => 'test', :first_name => 'test',
                        :last_name => 'test', :email => '[email protected]', :display_name => 'test',
                        :user_level => 9)
          end
        end

    When I run rake db:migrate, the table is created with the columns as mentioned above, but the test data is not there:

        mysql> select * from users;
        Empty set (0.00 sec)

    Edit: I just dropped the whole database and restarted the migration, and now it is showing the following error:

        rake aborted!
        An error has occurred, all later migrations canceled:
        Can't mass-assign protected attributes: username, hashed_password, first_name, last_name, email, display_name, user_level

    What am I doing wrong? Please help. Thank you.
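
    The error comes from Rails' mass-assignment protection: the User model evidently declares attr_accessible (or attr_protected), so create cannot set those attributes in bulk. A sketch of the whitelist that would permit this seed row, assuming no other attributes need protecting (seed data is also usually better placed in db/seeds.rb than in a migration):

        # app/models/user.rb - attribute list mirrors the migration above
        class User < ActiveRecord::Base
          attr_accessible :username, :hashed_password, :first_name, :last_name,
                          :email, :display_name, :user_level
        end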

  • Missing UAC shield overlay on desktop shortcut icon when created by msi created from VS 2008

    - by Alain Hogue
    I created a setup program to deploy my VB.NET program using Visual Studio 2008. Inside this setup program I created a shortcut to the "primary output", to be installed on the user's desktop. Everything works correctly: the program is installed under "C:\Program Files" and the shortcut is created on the desktop. Also, when I use this shortcut I am prompted by UAC to authorize running the program as administrator. So far, so good... but!

    My desktop icon does not have the UAC shield overlay, even though the program is compiled with a manifest stating that it must run as administrator. Also, if I manually create a new shortcut on the desktop to the same executable after the installation, this new shortcut WILL have the shield overlay! I have tried rebooting and deleting the IconCache.db file, but it did not work.

    So my question is: how can I get my desktop shortcut to appear WITH the UAC shield overlay when installed initially? Thanks!

  • How to connect to SQLServer 2k5 using Ruby 1.8.7 over W2k3 with active record 2.3.5

    - by Luke
    Hi all, sorry for the blast. I'm trying to connect to SQL Server 2005 using Ruby 1.8.7 on Windows Server 2003 with ActiveRecord 2.3.5. When I run 'rake migrate' it throws the following:

        rake migrate --trace
        Hoe.new {...} deprecated. Switch to Hoe.spec.
        Invoke migrate (first_time)
        Invoke environment (first_time)
        Execute environment
        Execute migrate
        rake aborted!
        no such file to load -- odbc
        (...)
        C:/Program Files/test/Rakefile:146
        (...)

    My Rakefile at line 146 says:

        ActiveRecord::Migrator.migrate('db/migrate', ENV["VERSION"] ? ENV["VERSION"].to_i : nil)

    The database.yml has been configured in many ways without success. I've tried setting the mode to odbc, configuring a system DSN, and using ActiveRecord's SQL Server support directly, but no success at all. The same Rakefile works fine against Postgres and Oracle with the proper gems installed, of course. But I can't get this to work. Any help will be appreciated. Thanks in advance!
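
    "no such file to load -- odbc" means the Ruby ODBC binding itself is missing, not that the DSN is wrong; installing the ruby-odbc gem and pointing database.yml at a system DSN is the usual setup. A sketch, with the DSN name and credentials as placeholders:

        gem install ruby-odbc

        # config/database.yml
        development:
          adapter: sqlserver
          mode: odbc
          dsn: my_sqlserver_dsn
          username: app_user
          password: secret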

  • Bloated PDF created by TCPDF

    - by Yogi Yang 007
    In a web app developed in PHP we generate quotations and invoices (which are very simple, single-page documents) using the TCPDF lib. The lib works just great, but it seems to generate very large PDF files - in our case as large as 4 MB (+/- a few KB). How can I reduce this bloating of the PDF files generated by TCPDF? Here is the code snippet I am using:

        ob_start();
        include('quote_view_bag_pdf.php'); // a valid HTML file with PHP code to insert data from the DB
        $quote = ob_get_contents();        // capture the content of the included file
        ob_end_clean();

        // Generate the PDF file for this quote
        $k_path_url = '';                  // fixes a few errors in TCPDF
        require_once('tcpdf/config/lang/eng.php');
        require_once('tcpdf/tcpdf.php');

        // create new PDF document
        $pdf = new TCPDF();

        // remove default header/footer
        $pdf->setPrintHeader(false);
        $pdf->setPrintFooter(false);

        // add a page and print the HTML-formatted text
        $pdf->AddPage();
        $pdf->writeHTML($quote, true, 0, true, 0);

        // build the output file name, then close and output the PDF document
        $pdf_out_file = "pdf/Quote_" . $_POST['quote_id'] . "_.pdf";
        $pdf->Output($pdf_out_file, 'F');
        $pdf->Output($pdf_out_file, 'I');

    Hope this code fragment gives some idea?
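
    TCPDF output size is usually dominated by embedded fonts rather than page content, so two settings are worth trying right after constructing the object. A sketch; method names as in TCPDF 5.x:

        $pdf = new TCPDF();
        $pdf->SetCompression(true);         // zlib-compress the page streams
        $pdf->setFontSubsetting(true);      // embed only the glyphs actually used
        $pdf->SetFont('helvetica', '', 10); // a core font embeds nothing at all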

  • My Zend Framework 'quoting' mess.

    - by tharkun
    I've got a probably very simple issue to which I can't find a satisfactory (subjectively speaking) answer in the Zend Framework manual or elsewhere... There are so many ways to hand my PHP variables over to my SQL queries that I've lost the overview, and probably I lack some understanding about quoting in general.

    Prepared statements:

        $sql = "SELECT this, that FROM table WHERE id = ? AND restriction = ?";
        $stmt = $this->_db->query($sql, array($myId, $myValue));
        $result = $stmt->fetchAll();

    I understand that with this solution I don't need to quote anything, because the db handles it for me.

    Querying Zend_Db_Table and _Row objects over the API:

        $users = new Users();

        a) $users->fetchRow('userID = ' . $userID);
        b) $users->fetchRow('userID = ' . $users->getAdapter()->quote($userID, 'INTEGER'));
        c) $users->fetchRow('userID = ?', $userID);
        d) $users->fetchRow('userID = ?', $users->getAdapter()->quote($userID, 'INTEGER'));

    Questions: I understand that a) is not OK because it isn't quoted at all. But what about the other versions - which is best? Is c) treated like a prepared statement and automatically quoted, or do I need to use d) when I use the ? placeholder?
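
    For comparison, a sketch of the fully explicit form: the adapter's quoteInto() inserts the quoted value into the placeholder before the string ever reaches fetchRow(), which makes the quoting step visible regardless of how fetchRow() interprets its arguments (and makes clear why pre-quoting a value that will be quoted again, as in d), would quote twice):

        // quoteInto() is Zend_Db_Adapter_Abstract API: the value is quoted
        // and substituted for the ? before the WHERE string is built.
        $where = $users->getAdapter()->quoteInto('userID = ?', $userID);
        $row   = $users->fetchRow($where);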

  • WYSIHAT Photos upload not working - Paperclip - Ruby on Rails

    - by bgadoci
    I just successfully installed WysiHat in my Rails blog. It seems that the 'add a picture' feature is not working: it successfully allows me to find and select a picture from my desktop, but upon clicking save, it does nothing. I also have Paperclip successfully installed, and I am wondering if that may have something to do with it. Perhaps Paperclip is getting in the way, or perhaps I need to connect Paperclip and WysiHat somehow. Any ideas? (Let me know if I need to post any code.) Also, wysihat-engine uses Facebox; not sure if that is relevant.

    Update: added the server log below. It looks like Paperclip reports saving attachments, so I'm not sure what else is going wrong.

        Processing PostsController#update (for 127.0.0.1 at 2010-04-23 16:42:14) [PUT]
          Parameters: {"commit"=>"Update", "post"=>{"body"=>"<p>Duis autem vel eum iriure dolor in hendrerit in vulputate velit esse molestie consequat, vel illum dolore eu feugiat nulla facilisis at vero eros et accumsan et iusto odio dignissim qui blandit praesent luptatum zzril delenit augue duis dolore te feugait nulla facilisi. Lorem ipsum dolor sit amet, consectetuer adipiscing elit, sed diam nonummy nibh euismod tincidunt ut laoreet dolore magna aliquam erat volutpat.</p>", "title"=>"Rails Code for Search"}, "authenticity_token"=>"hndm6pxaPLfgnSMFAmLDGNo86mZG3XnlfJoNOI/P+O8=", "id"=>"105"}
          Post Load (0.2ms)   SELECT * FROM "posts" WHERE ("posts"."id" = 105)
          Post Update (0.3ms) UPDATE "posts" SET "updated_at" = '2010-04-23 21:42:14', "body" = '<p>Duis autem vel eum iriure dolor in hendrerit in vulputate velit esse molestie consequat, vel illum dolore eu feugiat nulla facilisis at vero eros et accumsan et iusto odio dignissim qui blandit praesent luptatum zzril delenit augue duis dolore te feugait nulla facilisi. Lorem ipsum dolor sit amet, consectetuer adipiscing elit, sed diam nonummy nibh euismod tincidunt ut laoreet dolore magna aliquam erat volutpat.</p>' WHERE "id" = 105
        [paperclip] Saving attachments.
        Redirected to http://localhost:3000/posts/105
        Completed in 12ms (DB: 0) | 302 Found [http://localhost/posts/105]

  • How do you find a function's virtual call address in assembly?

    - by Daniel
    I've googled around, but I'm not sure I am asking the right question and I couldn't find much regardless; perhaps a link would be helpful. I made a C++ program that shows a message box, then I opened it up in OllyDbg and went to the part where it calls MessageBoxW. The call address of MessageBoxW changes each time I run the app, as Windows updates my imports table to hold the correct address of MessageBoxW.

    So my question is: how do I find the virtual address of MessageBoxW in my imports table, and how can I use this in OllyDbg? Basically I'm trying to make a code cave in assembly to call MessageBoxW again.

    I got fairly close once by searching the executable with a hex editor: I found the position of the call and, I think, the virtual address. But when I called that virtual address in Olly and saved it to the executable, the next time I opened it the call was replaced with a bunch of DB xyz directives (which looked like the virtual address) - why did the call get removed? Sorry if my terminology is off; I'm new to this, so I'm not quite sure what to call things.
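
    For orientation, a sketch of what the compiler actually emits for an imported API: the call goes indirectly through an Import Address Table (IAT) slot, and the loader writes the real address of MessageBoxW into that slot at startup. That is why the target appears to change between runs, and why a code cave should reuse the same IAT slot rather than a hard-coded function address. The addresses below are illustrative only, and the strings would need to be wide (UTF-16) for the W variant:

        ; what "call MessageBoxW" really is in the compiled binary:
        call dword ptr [00402010h]    ; 00402010 = IAT slot; the loader patches it at load time

        ; a code cave can reuse the same slot (stdcall, args pushed right to left):
        push 0                        ; uType = MB_OK
        push offset szCaption         ; lpCaption (wide string)
        push offset szText            ; lpText (wide string)
        push 0                        ; hWnd = NULL
        call dword ptr [00402010h]    ; same IAT slot, stable across runs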

  • Cucumber could not find table; but it's there. What is going on?

    - by JZ
    I'm working with Cucumber and I'm running into difficulties. When I run "cucumber features", I am met with errors: Cucumber is unable to find my requests table. What obvious mistake am I making? Thank you in advance!

    Bash:

        justin-zollarss-mac-pro:conversion justinz$ cucumber features
        Using the default profile...
        /Users/justinz/.gem/ruby/1.8/gems/rails-2.3.5/lib/rails/gem_dependency.rb:119:Warning: Gem::Dependency#version_requirements is deprecated and will be removed on or after August 2010. Use #requirement
        F--

        (::) failed steps (::)

        Could not find table 'requests' (ActiveRecord::StatementInvalid)
        ./features/article_steps.rb:3
        ./features/article_steps.rb:2:in `each'
        ./features/article_steps.rb:2:in `/^I have requests named (.+)$/'
        features/manage_articles.feature:7:in `Given I have requests named Foo, Bar'

        Failing Scenarios:
        cucumber features/manage_articles.feature:6 # Scenario: Conversion

        1 scenario (1 failed)
        3 steps (1 failed, 2 skipped)
        0m0.154s

    manage_articles.feature:

        Feature: Manage Articles
          In order to make sales
          As a customer
          I want to make conversions

          Scenario: Conversion
            Given I have requests named Foo, Bar
            When I go to the list of customers
            Then I should see a new "customer"

    article_steps.rb:

        Given /^I have requests named (.+)$/ do |firsts|
          firsts.split(', ').each do |first|
            Request.create!(:first => first)
            pending # express the regexp above with the code you wish you had
          end
        end

        Then /^I should see a new "([^"]*)"$/ do |arg1|
          pending # express the regexp above with the code you wish you had
        end

    DB schema:

        ActiveRecord::Schema.define(:version => 20100528011731) do
          create_table "requests", :force => true do |t|
            t.string   "institution"
            t.string   "website"
            t.string   "type"
            t.string   "users"
            t.string   "first"
            t.string   "last"
            t.string   "jobtitle"
            t.string   "phone"
            t.string   "email"
            t.datetime "created_at"
            t.datetime "updated_at"
          end
        end
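
    Cucumber runs against the test environment, not development, so the table can exist in the development database and still be missing here. Syncing the test schema is the usual first step; standard Rails 2.3 tasks, shown as a sketch:

        rake db:migrate          # make sure development is current
        rake db:test:prepare     # load the schema into the test database
        cucumber features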

  • Place Query Results into Array then Implode?

    - by jason
    Basically I pull an id from table1, use that id to find site ids in table2, then need to collect the site ids in an array, implode them, and query table3 for site names. I cannot implode the array correctly: first I got an error, then I used a while loop. With the while loop the output simply says: Array

        $mysqli = mysqli_connect("server", "login", "pass", "db");

        $sql = "SELECT MarketID FROM marketdates WHERE Date = '2010-04-04 00:00:00' AND VenueID = '2'";
        $result = mysqli_query($mysqli, $sql) or die(mysqli_error($mysqli));
        $dates_id = mysqli_fetch_assoc($result);
        $comma_separated = implode(",", $dates_id);
        echo $comma_separated; // This returns 79, which is correct.

        $sql = "SELECT SIteID FROM bookings WHERE BSH_ID = '1' AND MarketID = '$comma_separated'";
        $result = mysqli_query($mysqli, $sql) or die(mysqli_error($mysqli));

        // This is where my problems start
        $SIteID = array();
        while ($newArray = mysqli_fetch_array($result, MYSQLI_ASSOC)) {
            $SIteID[] = $newArray['SIteID'];
        }
        $locationList = implode(",", $SIteID);
        ?>

    Basically what I need to do is correctly move the query results into an array that I can implode and use in a third query to pull names from table3.
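
    A sketch of the last step: cast the ids, implode them, and use an IN (...) list, since '=' compares against a single value only. The table3 name and columns below are placeholders, as they aren't given in the question:

        $siteIds = array();
        while ($row = mysqli_fetch_assoc($result)) {
            $siteIds[] = (int) $row['SIteID'];   // cast keeps the list SQL-safe
        }
        $idList = implode(",", $siteIds);        // e.g. "12,15,19"

        // "sites"/"SiteName" are illustrative names for table3
        $sql = "SELECT SiteID, SiteName FROM sites WHERE SiteID IN ($idList)";
        $result = mysqli_query($mysqli, $sql) or die(mysqli_error($mysqli));
        while ($row = mysqli_fetch_assoc($result)) {
            echo $row['SiteName'], "\n";
        }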

  • Synchronizing ASP.NET MVC action methods with ReaderWriterLockSlim

    - by James D
    Are there any obvious issues/problems/gotchas with synchronizing access (in an ASP.NET MVC blogging engine) to a shared object model (NHibernate, but it could be anything) at the controller/action level via ReaderWriterLockSlim? (Assume the object model is very large and expensive to build per request, so we need to share it among requests.)

    Here's how a typical "read post" action would look: enter the read lock, do some work, exit the read lock.

        public ActionResult ReadPost(int id)
        {
            // ReaderWriterLockSlim allows multiple concurrent reads; this method
            // only blocks in the unlikely event that some other client is currently
            // writing to the model, which would only happen if a comment were being
            // submitted or a new post were being saved.
            _lock.EnterReadLock();
            try
            {
                // Access the model, fetch the post with the specified id
                // (pseudocode, etc.)
                Post p = TheObjectModel.GetPostByID(id);
                ActionResult ar = View(p);
                return ar;
            }
            finally
            {
                // Under all code paths, we must release the read lock
                _lock.ExitReadLock();
            }
        }

    Meanwhile, if a user submits a comment or an author writes a new post, they're going to need write access to the model, which is done roughly like so:

        [AcceptVerbs(HttpVerbs.Post)]
        public ActionResult SaveComment(/* some posted data */)
        {
            // try/finally omitted for brevity
            _lock.EnterWriteLock();
            // Save the comment to the DB, update the model to include the comment, etc.
            _lock.ExitWriteLock();
        }

    Of course, this could also be done by tagging those action methods with some sort of "synchronized" attribute... but however you do it, my question is: is this a bad idea?

    P.S. ReaderWriterLockSlim is optimized for multiple concurrent reads and only blocks if the write lock is held. Since writes are so infrequent (thousands to hundreds of thousands of reads for every write), and since they're of such short duration, the effect is that the model is synchronized, almost nobody ever blocks, and those who do don't block for long.

  • Software Update Notifications

    - by devio
    I am considering implementing some sort of software update notification for one of the web applications I am developing. There are several questions I came across:

    Should the update check be executed on the client or on the server? Client-side means the software retrieves the most current version information, performs its checks, and displays the update information. Server-side means the software sends its version info to the server, which in turn does the calculations and returns information to the client. My guess is that a server-side implementation may turn out to be more flexible and more powerful than client-side, as I can easily add functionality to the server, as long as the client understands it.

    Where should the update info be displayed? Is it OK to display it on the login screen? Should only admins see it? (This is a web app with a database, so updating requires manipulation of the db and the web app, which is only done by admins.) What about a little beeping, flashing icon which increases in size as the version gets more obsolete every day ;) ?

    Privacy issues: not everybody likes to have their app usage stats broadcast over the internet.

    The question: what do you think?

  • Kohana 3, themes outside application.

    - by Marek
    Hi all. I read http://forum.kohanaframework.org/comments.php?DiscussionID=5744&page=1#Item_0 and I want to use a similar solution, but with the theme name coming from the db. In my site controller's after():

        $theme = $page->get_theme_name(); // Orange
        Kohana::set_module_path('themes', Kohana::get_module_path('themes').'/'.$theme);
        $this->template = View::factory('layout');

    I checked with Firebug:

        fire::log(Kohana::get_module_path('themes')); // D:\tools\xampp\htdocs\kohana\themes/Orange

    I am sure that path exists. Directly inside the 'Orange' folder I have a 'views' folder with a layout.php file. But I am getting:

        The requested view layout could not be found

    The extended Kohana_Core is just:

        public static function get_module_path($module_key)
        {
            return self::$_modules[$module_key];
        }

        public static function set_module_path($module_key, $path)
        {
            self::$_modules[$module_key] = $path;
        }

    Could anybody help me with solving this issue? Maybe it is a .htaccess problem:

        # Turn on URL rewriting
        RewriteEngine On

        # Put your installation directory here:
        # If your URL is www.example.com/kohana/, use /kohana/
        # If your URL is www.example.com/, use /
        RewriteBase /kohana/

        # Protect application and system files from being viewed
        RewriteCond $1 ^(application|system|modules)

        # Rewrite to index.php/access_denied/URL
        RewriteRule ^(.*)$ / [PT,L]

        RewriteRule ^(media) - [PT,L]
        RewriteRule ^(themes) - [PT,L]

        # Allow these directories and files to be displayed directly:
        # - index.php (DO NOT FORGET THIS!)
        # - robots.txt
        # - favicon.ico
        # - Any file inside of the images/, js/, or css/ directories
        RewriteCond $1 ^(index\.php|robots\.txt|favicon\.ico|static)

        # No rewriting
        RewriteRule ^(.*)$ - [PT,L]

        # Rewrite all other URLs to index.php/URL
        RewriteRule ^(.*)$ index.php/$1 [PT,L]

    Could somebody help? What am I doing wrong? Regards

  • Why does my data not pass into my view correctly?

    - by dmanexe
    I have a model, view and controller not interacting correctly, and I do not know where the error lies.

    First, the controller. According to the CodeIgniter documentation, I'm passing variables correctly here:

        function view()
        {
            $html_head = array(
                'title' => 'Estimate Management'
            );
            $estimates = $this->Estimatemodel->get_estimates();
            $this->load->view('html_head', $html_head);
            $this->load->view('estimates/view', $estimates);
            $this->load->view('html_foot');
        }

    The model (short and sweet):

        function get_estimates()
        {
            $query = $this->db->get('estimates')->result();
            return $query;
        }

    And finally the view, just printing the data for initial development purposes:

        <? print_r($estimates); ?>

    Now $estimates is undefined when I navigate to this page. However, I know that $query is defined, because it works when I run the model code directly in the view.
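
    A sketch of the likely fix: CodeIgniter turns the keys of the array passed to load->view() into view variables, so the result set has to be wrapped under the name the view expects rather than passed directly:

        function view()
        {
            $html_head = array('title' => 'Estimate Management');
            // wrap the rows under the key 'estimates' so the view sees $estimates
            $data = array('estimates' => $this->Estimatemodel->get_estimates());

            $this->load->view('html_head', $html_head);
            $this->load->view('estimates/view', $data);
            $this->load->view('html_foot');
        }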

  • JPA - Setting entity class property from calculated column?

    - by growse
    I'm just getting to grips with JPA in a simple Java web app running on GlassFish 3 (the persistence provider is EclipseLink). So far I'm really liking it (bugs in the NetBeans/GlassFish interaction aside), but there's something I want to do that I'm not sure how to do.

    I've got an entity class (Article) that's mapped to a database table (article). I'm trying to run a query against the database that returns a calculated column, but I can't figure out how to set up a property of the Article class so that the property gets filled by the column value when I call the query.

    If I do a regular "select id, title, body from article" query, I get a list of Article objects with the id, title and body properties filled. This works fine. However, if I do the below:

        Query q = em.createNativeQuery("select id,title,shorttitle,datestamp,body,true as published, ts_headline(body,q,'ShortWord=0') as headline, type from articles,to_tsquery('english',?) as q where idxfti @@ q order by ts_rank(idxfti,q) desc", Article.class);

    (this is a full-text search using tsearch2 on Postgres - it's a DB-specific function, so I'm using a native query)

    You can see I'm fetching a calculated column, called headline. How do I add a headline property to my Article class so that it gets populated by this query? So far I've tried making it @Transient, but that just ends up with it being null all the time.
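
    One standard JPA route is a result-set mapping that returns the entity plus the extra scalar, after which the transient property is set by hand. A sketch; setHeadline() is assumed to exist as the setter for the @Transient property from the question:

        // On the Article entity (or in orm.xml):
        @SqlResultSetMapping(name = "ArticleWithHeadline",
            entities = @EntityResult(entityClass = Article.class),
            columns  = @ColumnResult(name = "headline"))

        // At the query site - each row comes back as Object[]{Article, String}:
        Query q = em.createNativeQuery(sql, "ArticleWithHeadline");
        @SuppressWarnings("unchecked")
        List<Object[]> rows = q.getResultList();
        for (Object[] row : rows) {
            Article a = (Article) row[0];
            a.setHeadline((String) row[1]); // fill the @Transient property
        }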

  • Am I correctly extracting JPEG binary data from this mysqldump?

    - by Glenn
    I have a very old .sql backup of a vBulletin site that I ran around 8 years ago. I am trying to see the file attachments that are stored in the DB. The script below extracts them all, and the output is verified to be JPEG by hex dumping and checking the SOI (start of image) and EOI (end of image) bytes (FFD8 and FFD9, respectively) according to the JPEG wiki page. But when I try to open the files with Evince, I get this message: "Error interpreting JPEG image file (JPEG datastream contains no image)". What could be going on here?

    Some background info:

    - the SQL dump is around 8 years old
    - vBulletin 2.x was the software that stored the info
    - most likely PHP 4 was used
    - most likely MySQL 4.0, possibly even 3.x
    - the column datatype these attachments are stored in is mediumtext

    My Python 3.1 script:

        #!/usr/bin/env python3.1
        import re

        trim_l = re.compile(b"""^INSERT INTO attachment VALUES\('\d+', '\d+', '\d+', '(.+)""")
        trim_r = re.compile(b"""(.+)', '\d+', '\d+'\);$""")
        extractor = re.compile(b"""^(.*(?:\.jpe?g|\.gif|\.bmp))', '(.+)$""")

        with open('attachments.sql', 'rb') as fh:
            for line in fh:
                data = trim_l.findall(line)[0]
                data = trim_r.findall(data)[0]
                data = extractor.findall(data)
                if data:
                    name, data = data[0]
                    try:
                        filename = 'files/%s' % str(name, 'UTF-8')
                        ah = open(filename, 'wb')
                        ah.write(data)
                    except UnicodeDecodeError:
                        continue
                    finally:
                        ah.close()
            fh.close()

    Update: the JPEG wiki page says FF bytes are section markers, with the next byte indicating the section type. I see some that are not listed in the wiki page (specifically, I see a lot of 5C bytes, so FF5C). But the list is of "common markers", so I'm trying to find a more complete list. Any guidance here would also be appreciated.
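
    Those 0x5C bytes are the giveaway: 0x5C is the backslash, and mysqldump backslash-escapes binary data inside quoted literals, so each blob must be unescaped before it is a valid JPEG. A sketch of the reverse mapping for the standard MySQL escape set:

        # Undo mysqldump's escaping of binary data inside '...' literals.
        MYSQL_ESCAPES = {
            b"\\0": b"\x00", b"\\n": b"\n", b"\\r": b"\r", b"\\Z": b"\x1a",
            b"\\'": b"'", b'\\"': b'"', b"\\\\": b"\\",
        }

        def unescape(data):
            out = bytearray()
            i = 0
            while i < len(data):
                pair = data[i:i + 2]
                if pair in MYSQL_ESCAPES:
                    out += MYSQL_ESCAPES[pair]
                    i += 2
                else:
                    out.append(data[i])   # ordinary byte, copy through
                    i += 1
            return bytes(out)

        # usage in the script above: ah.write(unescape(data)) instead of ah.write(data)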

  • Lightweight development web server with support for PHP v2

    - by David
    In line with this question: http://stackoverflow.com/questions/171655/lightweight-web-app-server-for-php

    The above question has been asked numerous times and answered exactly the same way in every case I've found using Google. My question is similar to a degree, but with a different goal: on-demand development instances.

    I have come up with a somewhat questionable solution for hosting arbitrary directories from my user account for development testing. I am not interested in custom vhosts, but am looking to emulate the behaviour I get when using paster or mongrel for Python and Ruby respectively. On Ubuntu 9.10:

        TOXIC@~/ APACHE_RUN_USER=$USER APACHE_RUN_GROUP=www-data apache2 -d ~/Desktop/ -c "Listen 2990"

    Is there a better solution? Could I do something similar with nginx or lighttpd? Note: the above won't work correctly in a stock environment without a copied and altered httpd.conf.

    Update: the ideal goal is to mimic Paster, WEBrick and Mongrel for rapid local development hosting. With those lightweight servers it takes less than a minute to get a working instance running (not factoring in any DB support). Apache2 vhosts are great, but I've been using Apache2 for over ten years and it would be some sort of abomination hack to set up a new entry in /etc/hosts unless you have your own DNS, in which case a wildcard subdomain setup would probably work great - EXCEPT for one more problem: it's pretty easy for me to know what is being hosted (e.g. by paster or mongrel) just by doing sudo netstat -tulpn, while there would be a good chance of confusion in figuring out which vhost is which.
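
    For what it's worth, later PHP releases solved exactly this: PHP 5.4+ ships a built-in CLI development server that behaves much like paster or mongrel - one command, one docroot, one port (not available in the PHP 5.2/5.3 versions current when this was asked):

        php -S localhost:2990 -t ~/Desktop/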

  • How to estimate size of data to transfer when using DbCommand.ExecuteXXX?

    - by Yadyn
    I want to show the user detailed progress information when performing potentially lengthy database operations - specifically, when inserting or updating data that may be on the order of hundreds of KB or several MB.

    Currently I'm using in-memory DataTables and DataRows, which are then synced with the database via TableAdapter.Update calls. This works fine and dandy, but the single call leaves little opportunity to glean any kind of progress info to show the user. I have no idea how much data is passing through the network to the remote DB, or its progress. Basically, all I know is when Update returns, at which point it is assumed complete (barring any errors or exceptions). So all I can show is 0%, then a pause, then 100%.

    I can count the number of rows, even going so far as to count how many are actually Modified or Added, and I could calculate an estimated size per DataRow based on the datatype of each column, using sizeof for value types like int and checking lengths for things like strings or byte arrays. With that, I could probably determine an estimated total transfer size before updating, but I'm still stuck without any progress info once Update is called on the TableAdapter.

    Am I stuck using an indeterminate progress bar or a wait cursor? Would I need to radically change our data access layer to be able to hook into this kind of information? Even if I can't get it down to the precise KB transferred (like a web browser's file download progress bar), could I at least know when each DataRow/DataTable finishes, or something? How do you best show this kind of progress info using ADO.NET?
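
    One low-tech option is to push the changed rows through Update() one at a time (or in small batches) so each completed row can drive a progress callback, trading some throughput for feedback. A sketch - 'adapter' is the underlying DbDataAdapter, 'reportProgress' is an assumed UI callback rather than ADO.NET API, and deleted rows are omitted for brevity:

        // Select only the rows that will actually be sent to the server.
        DataRow[] changed = table.Select("", "",
            DataViewRowState.ModifiedCurrent | DataViewRowState.Added);

        for (int i = 0; i < changed.Length; i++)
        {
            // DbDataAdapter.Update(DataRow[]) sends just this row's INSERT/UPDATE.
            adapter.Update(new DataRow[] { changed[i] });
            reportProgress((i + 1) * 100 / changed.Length); // percent complete
        }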
