Search Results

Search found 13501 results on 541 pages for 'dimensional model'.

Page 307/541 | < Previous Page | Next Page >

  • Use a vector to index a matrix without linear index

    - by David_G
    G'day, I'm trying to find a way to use a vector of [x,y] points to index from a large matrix in MATLAB. Usually, I would convert the subscript points to the linear index of the matrix (e.g. Use a vector as an index to a matrix in MATLab). However, the matrix is 4-dimensional, and I want to take all of the elements of the 3rd and 4th dimensions that have the same 1st and 2nd dimension. Let me hopefully demonstrate with an example: Matrix = nan(4,4,2,2); % where the dimensions are (x,y,depth,time) Matrix(1,2,:,:) = 999; % note that this value could change in depth (3rd dim) and time (4th dim) Matrix(3,4,:,:) = 888; % note that this value could change in depth (3rd dim) and time (4th dim) Matrix(4,4,:,:) = 124; Now, I want to be able to index with the subscripts (1,2) and (3,4), etc. and return not only the 999 and 888 which exist in Matrix(:,:,1,1) but also the contents which exist at Matrix(:,:,1,2), Matrix(:,:,2,1) and Matrix(:,:,2,2), and so on (IRL, the dimensions of Matrix might be more like size(Matrix) = (300 250 30 200)). I don't want to use linear indices because I would like the results to be in a similar vector fashion. For example, I would like a result which is something like: ans(time=1) 999 888 124 999 888 124 ans(time=2) etc etc etc etc etc etc I'd also like to add that due to the size of the matrix I'm dealing with, speed is an issue here - which is why I'd like to use subscript indices to index into the data. I should also mention that (unlike this question: Accessing values using subscripts without using sub2ind), since I want all the information stored in the extra dimensions, 3 and 4, at the i-th and j-th indices, I don't think a slightly faster version of sub2ind would cut it.
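
    A minimal sketch of one way to get that shape of result, assuming the subscript pairs are collected in an N-by-2 array (pts, flat and vals are illustrative names, not from the post). sub2ind is only applied to the first two dimensions, so the depth and time dimensions come back intact rather than as single elements:

        pts = [1 2; 3 4; 4 4];                      % the [x,y] points to look up
        [nx, ny, nz, nt] = size(Matrix);
        idx = sub2ind([nx ny], pts(:,1), pts(:,2)); % linear index over x,y only
        flat = reshape(Matrix, nx*ny, nz, nt);      % collapse x,y into one dimension (no data copy)
        vals = flat(idx, :, :);                     % N-by-depth-by-time; vals(:,:,1) is everything at time 1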

    Read the article

  • WCF web service: response is 200/ok, but response body is empty

    - by user1021224
    I am creating a WCF web api service. My problem is that some methods return a 200/OK response, but the headers and the body are empty. In setting up my web service, I created an ADO.NET Entity Data Model. I chose ADO.NET DbContext Generator when I added a code generation item. In the Model.tt document, I changed HashSet and ICollection to List. I built my website. It used to be that when I coded a method to return a List of an entity (like List<Customer> or List<Employee> in the Northwind database), it worked fine. Over time, I could not return a List of any of those, and could only grab one entity. Now, it's gotten to a point where I can return a List<string> or List<int>, but not a List or an instance of any entity. When I try to get a List<AnyEntity>, the response is 200/OK, but the response headers and body are empty. I have tried using the debugger and Firefox's Web Console. Using FF's WC, I could only get an "undefined" status code. I am not sure where to go from here. EDIT: In trying to grab all Areas from the database, I do this: [WebGet(UriTemplate = "areas")] public List<a1Areas> AllAreas() { return context.a1Areas.ToList(); } I would appreciate any more methods for debugging this. Thanks in advance. Found the answer, thanks to Merlyn! In my Global.asax file, I forgot to comment out two lines that took care of proxies and disposing of my context object. The code is below: void Application_BeginRequest(object sender, EventArgs e) { var context = new AssignmentEntities(); context.Configuration.ProxyCreationEnabled = false; HttpContext.Current.Items["_context"] = context; } void Application_EndRequest(object sender, EventArgs e) { var context = HttpContext.Current.Items["_context"] as AssignmentEntities; if (context != null) { context.Dispose(); } }

    Read the article

  • Fastest way to generate delimited string from 1d numpy array

    - by Abiel
    I have a program which needs to turn many large one-dimensional numpy arrays of floats into delimited strings. I am finding this operation quite slow relative to the mathematical operations in my program and am wondering if there is a way to speed it up. For example, consider the following loop, which takes 100,000 random numbers in a numpy array and joins each array into a comma-delimited string. import numpy as np x = np.random.randn(100000) for i in range(100): ",".join(map(str, x)) This loop takes about 20 seconds to complete (total, not each cycle). In contrast, consider that 100 cycles of something like elementwise multiplication (x*x) would take less than 1/10 of a second to complete. Clearly the string join operation creates a large performance bottleneck; in my actual application it will dominate total runtime. This makes me wonder, is there a faster way than ",".join(map(str, x))? Since map() is where almost all the processing time occurs, this comes down to the question of whether there is a faster way to convert a very large number of numbers to strings.
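
    A hedged sketch of two alternatives that are usually much faster, on the assumption that the real cost is str() being called on every numpy scalar (exact timings will vary with machine and numpy version):

        import numpy as np
        from io import BytesIO

        x = np.random.randn(100000)

        # 1) %-formatting sidesteps the slow str(np.float64) path; pick the precision you need
        s1 = ",".join(["%.10g" % v for v in x])

        # 2) np.savetxt does the formatting and joining in one call; written here as one comma-delimited row
        buf = BytesIO()
        np.savetxt(buf, x[None, :], fmt="%.10g", delimiter=",")
        s2 = buf.getvalue().decode().strip()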

    Read the article

  • Generate money type fields using code first EF CTP5

    - by BBHorus
    In this blog post: EF4 Code First Control Unicode and Decimal Precision, Scale with Attributes, Dane Morgridge used attributes to control the creation of different column types in your database - and I found this pretty neat, by the way. How do I generate money type fields in my resulting database using the code-first API of EF CTP5? Is it possible to do it from the model, using conventions or attributes? Sorry about my English; it is not my main language. Thanks in advance.
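
    A hedged sketch using the CTP5 fluent API rather than an attribute (Invoice/Amount are placeholder names, and ModelBuilder was the CTP5 name of what later became DbModelBuilder):

        using System.Data.Entity;

        public class Invoice
        {
            public int Id { get; set; }
            public decimal Amount { get; set; }
        }

        public class BillingContext : DbContext
        {
            public DbSet<Invoice> Invoices { get; set; }

            protected override void OnModelCreating(ModelBuilder modelBuilder)
            {
                // Map the decimal property to SQL Server's money column type
                modelBuilder.Entity<Invoice>()
                            .Property(i => i.Amount)
                            .HasColumnType("money");
            }
        }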

    Read the article

  • How to properly test Hibernate length restriction?

    - by Cesar
    I have a POJO mapped with Hibernate for persistence. In my mapping I specify the following: <class name="ExpertiseArea"> <id name="id" type="string"> <generator class="assigned" /> </id> <version name="version" column="VERSION" unsaved-value="null" /> <property name="name" type="string" unique="true" not-null="true" length="100" /> ... </class> And I want to test that if I set a name longer than 100 characters, the change won't be persisted. I have a DAO where I save the entity with the following code: public T makePersistent(T entity){ transaction = getSession().beginTransaction(); transaction.begin(); try{ getSession().saveOrUpdate(entity); transaction.commit(); }catch(HibernateException e){ logger.debug(e.getMessage()); transaction.rollback(); } return entity; } Actually the code above is from a GenericDAO which all my DAOs inherit from. Then I created the following test: public void testNameLengthMustBe100orLess(){ ExpertiseArea ea = new ExpertiseArea( "1234567890" + "1234567890" + "1234567890" + "1234567890" + "1234567890" + "1234567890" + "1234567890" + "1234567890" + "1234567890" + "1234567890"); assertTrue("Name should be 100 characters long", ea.getName().length() == 100); ead.makePersistent(ea); List<ExpertiseArea> result = ead.findAll(); assertEquals("Size must be 1", result.size(),1); ea.setName(ea.getName()+"1234567890"); ead.makePersistent(ea); ExpertiseArea retrieved = ead.findById(ea.getId(), false); assertTrue("Both objects should be equal", retrieved.equals(ea)); assertTrue("Name should be 100 characters long", (retrieved.getName().length() == 100)); } The object is persisted ok. Then I set a name longer than 100 characters and try to save the changes, which fails: 14:12:14,608 INFO StringType:162 - could not bind value '12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890' to parameter: 2; data exception: string data, right truncation 14:12:14,611 WARN JDBCExceptionReporter:100 - SQL Error: -3401, SQLState: 22001 14:12:14,611 ERROR JDBCExceptionReporter:101 - data exception: string data, right truncation 14:12:14,614 ERROR AbstractFlushingEventListener:324 - Could not synchronize database state with session org.hibernate.exception.DataException: could not update: [com.exp.model.ExpertiseArea#33BA7E09-3A79-4C9D-888B-4263314076AF] //Stack trace 14:12:14,615 DEBUG GenericDAO:87 - could not update: [com.exp.model.ExpertiseArea#33BA7E09-3A79-4C9D-888B-4263314076AF] 14:12:14,616 DEBUG JDBCTransaction:186 - rollback 14:12:14,616 DEBUG JDBCTransaction:197 - rolled back JDBC Connection That's expected behavior. However when I retrieve the persisted object to check if its name is still 100 characters long, the test fails. The way I see it, the retrieved object should have a name that is 100 characters long, given that the attempted update failed. The last assertion fails because the name is 110 characters long now, as if the ea instance was indeed updated. What am I doing wrong here?
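
    A hedged reading of why the last assertions fail, with a sketch of one way to adjust the test: the rolled-back flush never touches the Java object, and findById in the same Session hands back that same cached instance, so the in-memory name is still 110 characters. Evicting it (or using a fresh Session) before re-reading makes the test observe the database state; the session handling below is illustrative, not the only way to do it.

        ea.setName(ea.getName() + "1234567890"); // now 110 characters
        ead.makePersistent(ea);                  // flush fails, transaction is rolled back

        // The entity and the first-level cache still hold the long name,
        // so detach it before loading again (a brand-new Session works too).
        ead.getSession().evict(ea);

        ExpertiseArea retrieved = ead.findById(ea.getId(), false);
        assertEquals("Name should still be 100 characters long",
                     100, retrieved.getName().length());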

    Read the article

  • Zend_DB_Table Update problem

    - by davykiash
    I am trying to construct a simple update query in my model class Model_DbTable_Account extends Zend_Db_Table_Abstract { protected $_name = 'accounts'; public function activateaccount($activationcode) { $data = array( 'accounts_status' => 'active', ); $this->update($data, 'accounts_activationkey = ' . $activationcode); } However, I get an SQLSTATE[42S22]: Column not found: 1054 Unknown column 'my activation code value' in 'where clause' error. What am I missing in the Zend_Db_Table update construct?
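
    A hedged sketch of the likely fix: the activation key is concatenated into the WHERE clause unquoted, so MySQL parses it as a column name - hence the "Unknown column" error. Letting the adapter quote it with quoteInto avoids that and also guards against SQL injection:

        public function activateaccount($activationcode)
        {
            $data = array('accounts_status' => 'active');
            // Let the adapter quote the value instead of concatenating it raw
            $where = $this->getAdapter()->quoteInto('accounts_activationkey = ?', $activationcode);
            $this->update($data, $where);
        }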

    Read the article

  • Custom RIA Authentication

    - by cmaduro
    Following the steps in this post: http://forums.silverlight.net/forums/t/177042.aspx Where/how do I add the [Key] attribute to the Name property of the IAuthentication<User> implementation, where User is one of my ADO.NET Entity objects? My options so far seem to be: in the designer code-behind of my ADO.NET Entity Model; creating a partial User class and adding it there; or in the AuthenticationService.metadata.cs partial User class. It just does not seem to work no matter where I place the [Key] attribute.
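
    A hedged sketch of the metadata "buddy class" pattern that RIA Services usually picks up, assuming the generated entity class is named User; the partial class has to sit in the same namespace as the designer-generated code, otherwise the attribute is silently ignored:

        using System.ComponentModel.DataAnnotations;

        // Same namespace as the entity generated by the ADO.NET Entity Data Model
        [MetadataType(typeof(User.UserMetadata))]
        public partial class User
        {
            internal sealed class UserMetadata
            {
                private UserMetadata() { }

                [Key]
                public string Name { get; set; }
            }
        }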

    Read the article

  • Paperclip: delete attachments and "can't convert nil into String" error

    - by snitko
    I'm using Paperclip and here's what I do in the model to delete attachments: def before_save self.avatar = nil if @delete_avatar == 1.to_s end It works fine unless the @delete_avatar flag is set when the user is actually uploading the image (so the model receives both params[:user][:avatar] and params[:user][:delete_avatar]). This results in the following error: TypeError: can't convert nil into String from /Work/project/src/vendor/plugins/paperclip/lib/paperclip/storage.rb:40:in `dirname' from /Work/project/src/vendor/plugins/paperclip/lib/paperclip/storage.rb:40:in `flush_writes' from /Work/project/src/vendor/plugins/paperclip/lib/paperclip/storage.rb:38:in `each' from /Work/project/src/vendor/plugins/paperclip/lib/paperclip/storage.rb:38:in `flush_writes' from /Work/project/src/vendor/plugins/paperclip/lib/paperclip/attachment.rb:144:in `save' from /Work/project/src/vendor/plugins/paperclip/lib/paperclip/attachment.rb:162:in `destroy' from /Work/project/src/app/models/user.rb:72:in `before_save' from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/callbacks.rb:347:in `send' from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/callbacks.rb:347:in `callback' from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/callbacks.rb:249:in `create_or_update' from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/base.rb:2538:in `save_without_validation' from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/validations.rb:1078:in `save_without_dirty' from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/dirty.rb:79:in `save_without_transactions' from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/transactions.rb:229:in `send' from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/transactions.rb:229:in `with_transaction_returning_status' from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/database_statements.rb:136:in `transaction' from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/transactions.rb:182:in `transaction' from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/transactions.rb:228:in `with_transaction_returning_status' from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/transactions.rb:196:in `save' from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/transactions.rb:208:in `rollback_active_record_state!' from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/transactions.rb:196:in `save' from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/base.rb:723:in `create' I assume it has something to do with the avatar.dirty? value, because it is certainly true when this happens. The question is, how do I completely reset the attachment and abort the avatar upload when the flag is set, even when there are other changes to be saved?
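
    A hedged sketch of one way around it, assuming the Paperclip version in use exposes dirty? on the attachment: honour the delete flag only when no new file has been queued in the same save, so the callback never asks Paperclip to flush an assignment and a deletion at once.

        def before_save
          # Skip the delete when a fresh upload has just been assigned
          if @delete_avatar == 1.to_s && !avatar.dirty?
            self.avatar = nil
          end
        end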

    Read the article

  • Django Encoding Issues with MySQL

    - by Jordan Reiter
    Okay, so I have a MySQL database set up. Most of the tables are latin1 and Django handles them fine. But, some of them are UTF-8 and Django does not handle them. Here's a sample table (these tables are all from django-geonames): DROP TABLE IF EXISTS `geoname`; SET @saved_cs_client = @@character_set_client; SET character_set_client = utf8; CREATE TABLE `geoname` ( `id` int(11) NOT NULL, `name` varchar(200) NOT NULL, `ascii_name` varchar(200) NOT NULL, `latitude` decimal(20,17) NOT NULL, `longitude` decimal(20,17) NOT NULL, `point` point default NULL, `fclass` varchar(1) NOT NULL, `fcode` varchar(7) NOT NULL, `country_id` varchar(2) NOT NULL, `cc2` varchar(60) NOT NULL, `admin1_id` int(11) default NULL, `admin2_id` int(11) default NULL, `admin3_id` int(11) default NULL, `admin4_id` int(11) default NULL, `population` int(11) NOT NULL, `elevation` int(11) NOT NULL, `gtopo30` int(11) NOT NULL, `timezone_id` int(11) default NULL, `moddate` date NOT NULL, PRIMARY KEY (`id`), KEY `country_id_refs_iso_alpha2_e2614807` (`country_id`), KEY `admin1_id_refs_id_a28cd057` (`admin1_id`), KEY `admin2_id_refs_id_4f9a0f7e` (`admin2_id`), KEY `admin3_id_refs_id_f8a5e181` (`admin3_id`), KEY `admin4_id_refs_id_9cc00ec8` (`admin4_id`), KEY `fcode_refs_code_977fe2ec` (`fcode`), KEY `timezone_id_refs_id_5b46c585` (`timezone_id`), KEY `geoname_52094d6e` (`name`) ) ENGINE=MyISAM DEFAULT CHARSET=utf8; SET character_set_client = @saved_cs_client; Now, if I try to get data from the table directly using MySQLdb and a cursor, I get the text with the proper encoding: >>> import MySQLdb >>> from django.conf import settings >>> >>> conn = MySQLdb.connect (host = "localhost", ... user = settings.DATABASES['default']['USER'], ... passwd = settings.DATABASES['default']['PASSWORD'], ... db = settings.DATABASES['default']['NAME']) >>> cursor = conn.cursor () >>> cursor.execute("select name from geoname where name like 'Uni%Hidalgo'"); 1L >>> g = cursor.fetchone() >>> g[0] 'Uni\xc3\xb3n Hidalgo' >>> print g[0] Unión Hidalgo However, if I try to use the Geoname model (which is actually a django.contrib.gis.db.models.Model), it fails: >>> from geonames.models import Geoname >>> g = Geoname.objects.get(name__istartswith='Uni',name__icontains='Hidalgo') >>> g.name u'Uni\xc3\xb3n Hidalgo' >>> print g.name UniÃ³n Hidalgo There's pretty clearly an encoding error here. In both cases the database is returning 'Uni\xc3\xb3n Hidalgo' but Django is (incorrectly?) translating the '\xc3\xb3n' to 'Ã³n'. What can I do to fix this?
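
    A hedged diagnostic rather than a fix: if the round trip below prints the properly accented name, the bytes sitting in the utf8 column are themselves latin1-encoded UTF-8 (the rows were double-encoded on the way in), and the cure is repairing the stored data (casting the column through latin1 and a binary type back to utf8 on a backed-up copy) rather than changing Django's connection settings, which already default to utf8 for MySQL. If it raises UnicodeDecodeError instead, the problem lies elsewhere.

        >>> g.name
        u'Uni\xc3\xb3n Hidalgo'
        >>> print g.name.encode('latin1').decode('utf8')
        Unión Hidalgo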

    Read the article

  • Joomla - Force File Download / CSV Export

    - by lautaro.dragan
    I'm in need of help... this is my first time asking a question on SO, so please be kind :) I'm trying to force-download a file from PHP, so when the user hits a certain button, he gets a file download. The file is a csv (email, username) of all registered users. I decided to add this button to the admin users screen, as you can see in this screenshot. So I added the following code to the addToolbar function in administrator/components/com_users/views/users/view.html.php: JToolBarHelper::custom('users.export', 'export.png', 'export_f2.png', 'Exportar', false); This button is mapped to the following function in the com_users\controller\users.php controller: public function exportAllUsers() { ob_end_clean(); $app = JFactory::getApplication(); header("Content-type: text/csv"); header("Content-Disposition: attachment; filename=ideary_users.csv"); header("Pragma: no-cache"); header("Expires: 0"); echo "email,name\n"; $model = $this->getModel("Users"); $users = $model->getAllUsers(); foreach ($users as $user) { echo $user->email . ", " . ucwords(trim($user->name)) . "\r\n"; } $app->close(); } Now, this is actually working perfectly fine. The issue here is that after I download a file, if I hit any button in the admin that causes a POST, instead of it performing the action it should, it just downloads the file over again! For example: I hit the "Export" button "users.csv" downloads Then, I hit the "search" button "users.csv" downloads... what the hell? I'm guessing that when I hit the export button, a JS gets called and sets a form's action attribute to a URL... and expects a response or something, and then other buttons are prevented from re-setting the form's action attribute. I can't think of any real solution for this, but I'd rather avoid hacks if possible. So, what would be the standard, elegant solution that Joomla offers in this case?
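
    A hedged guess at the mechanism, with a sketch of a workaround (untested; it assumes the standard admin form id of adminForm and its hidden task field): because the response is a file download, the page never reloads, so the form's task is still set to users.export when the next button submits the same form. Clearing the task right after triggering the export stops the repeat downloads.

        // In addToolbar(), next to the existing custom button
        JToolBarHelper::custom('users.export', 'export.png', 'export_f2.png', 'Exportar', false);

        // Hypothetical workaround: reset the task field once the export has been submitted
        JFactory::getDocument()->addScriptDeclaration("
            Joomla.submitbutton = function(task) {
                Joomla.submitform(task, document.getElementById('adminForm'));
                if (task == 'users.export') {
                    document.getElementById('adminForm').task.value = '';
                }
            };
        ");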

    Read the article

  • How can I extend this SQL query to find the k nearest neighbors?

    - by Smigs
    I have a database full of two-dimensional data - points on a map. Each record has a field of the geometry type. What I need to be able to do is pass a point to a stored procedure which returns the k nearest points (k would also be passed to the sproc, but that's easy). I've found a query at http://blogs.msdn.com/isaac/archive/2008/10/23/nearest-neighbors.aspx which gets the single nearest neighbour, but I can't figure how to extend it to find the k nearest neighbours. This is the current query - T is the table, g is the geometry field, @x is the point to search around, Numbers is a table with integers 1 to n: DECLARE @start FLOAT = 1000; WITH NearestPoints AS ( SELECT TOP(1) WITH TIES *, T.g.STDistance(@x) AS dist FROM Numbers JOIN T WITH(INDEX(spatial_index)) ON T.g.STDistance(@x) < @start*POWER(2,Numbers.n) ORDER BY n ) SELECT TOP(1) * FROM NearestPoints ORDER BY n, dist The inner query selects the nearest non-empty region and the outer query then selects the top result from that region; the outer query can easily be changed to (e.g.) SELECT TOP(20), but if the nearest region only contains one result, you're stuck with that. I figure I probably need to recursively search for the first region containing k records, but without using a table variable (which would cause maintenance problems as you have to create the table structure and it's liable to change - there're lots of fields), I can't see how.
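
    One hedged way to extend it without a table variable (a sketch, not tuned): first find the smallest expansion step whose ring already contains at least @k candidates, then take the @k closest rows inside that ring. Anything outside the ring is at least as far away as everything inside it, so those @k really are the k nearest; if no ring ever reaches @k rows, @n stays NULL and nothing comes back.

        DECLARE @start FLOAT = 1000;
        DECLARE @k INT = 20;
        DECLARE @n INT;

        -- Smallest n whose search ring already holds at least @k rows
        SELECT @n = MIN(Numbers.n)
        FROM Numbers
        WHERE (SELECT COUNT(*)
               FROM T WITH(INDEX(spatial_index))
               WHERE T.g.STDistance(@x) < @start * POWER(2, Numbers.n)) >= @k;

        SELECT TOP(@k) *, T.g.STDistance(@x) AS dist
        FROM T WITH(INDEX(spatial_index))
        WHERE T.g.STDistance(@x) < @start * POWER(2, @n)
        ORDER BY dist;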

    Read the article

  • Creating stub objects that can be "claimed"

    - by Sean Johnson
    I'm working with a client on a rails project that wants to have a user model with 'stub' accounts that are created by an administrator, but that can later be claimed by the actual user, with authentication enabled on that user once the owner has claimed it. Was wondering if anyone has done this before, and what the best approach would be. We're currently using Authlogic to handle authentication.
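
    A hedged sketch of one shape this can take with Authlogic (claimed, create_stub and claim! are illustrative names rather than an established API, and it assumes email is used as the login field plus a claimed boolean column): stubs get throwaway credentials so Authlogic's validations pass, and claiming swaps in real credentials and flips the flag your sign-in and UI logic check.

        class User < ActiveRecord::Base
          acts_as_authentic

          # Administrator creates a placeholder account with random credentials
          def self.create_stub(email)
            temp = ActiveSupport::SecureRandom.hex(8)
            create!(:email => email,
                    :password => temp,
                    :password_confirmation => temp,
                    :claimed => false)
          end

          # The real owner claims the account by choosing their own password
          def claim!(password, confirmation)
            self.password = password
            self.password_confirmation = confirmation
            self.claimed = true
            save!
          end
        end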

    Read the article

  • Question about creating device-compatible bitmaps in C#

    - by MusiGenesis
    I am storing bitmap-like data in a two-dimensional int array. To convert this array into a GDI-compatible bitmap (for use with BitBlt), I am using this function: public IntPtr GetGDIBitmap(int[,] data) { int w = data.GetLength(0); int h = data.GetLength(1); IntPtr ret = IntPtr.Zero; using (Bitmap bmp = new Bitmap(w, h)) { for (int x = 0; x < w; x++) { for (int y = 0; y < h; y++) { Color color = Color.FromArgb(data[x, y]); bmp.SetPixel(x, y, color); } } ret = bmp.GetHbitmap(); } return ret; } This works as expected, but the call to bmp.GetHbitmap() has to allocate memory for the returned bitmap. I'd like to modify this method in two (probably related) ways: I'd like to remove the intermediate Bitmap from the above code entirely, and go directly from my int[,] array to the device-compatible bitmap (i.e. the IntPtr). I presume this would involve calling CreateCompatibleBitmap, but I don't know how to go from that call to actually manipulating the pixel values. This should logically follow from the answer to the first, but I'd also like my method to re-use existing GDI bitmap handles (instead of creating a new bitmap each time). How can I do this? NOTE: I don't really use Bitmap.SetPixel(), as its performance could best be described as "glacial". The code is just for illustration.
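
    Not a direct answer to either point, but a hedged sketch of the usual middle ground: keep the intermediate Bitmap, yet replace the per-pixel SetPixel loop with one LockBits pass and a block copy, which normally removes most of the cost. It assumes the int values are already packed ARGB and needs System.Drawing, System.Drawing.Imaging and System.Runtime.InteropServices; reusing an existing HBITMAP would instead mean P/Invoking CreateCompatibleBitmap/SetDIBits, which is beyond this sketch.

        public IntPtr GetGDIBitmap(int[,] data)
        {
            int w = data.GetLength(0);
            int h = data.GetLength(1);

            // Flatten [x, y] into the row-major order GDI+ expects (one scan line after another)
            int[] flat = new int[w * h];
            for (int y = 0; y < h; y++)
                for (int x = 0; x < w; x++)
                    flat[y * w + x] = data[x, y];

            using (Bitmap bmp = new Bitmap(w, h, PixelFormat.Format32bppArgb))
            {
                BitmapData bd = bmp.LockBits(new Rectangle(0, 0, w, h),
                                             ImageLockMode.WriteOnly,
                                             PixelFormat.Format32bppArgb);
                try
                {
                    // Copy one scan line at a time in case Stride is padded
                    // (IntPtr.Add needs .NET 4; otherwise use new IntPtr(bd.Scan0.ToInt64() + y * bd.Stride))
                    for (int y = 0; y < h; y++)
                        Marshal.Copy(flat, y * w, IntPtr.Add(bd.Scan0, y * bd.Stride), w);
                }
                finally
                {
                    bmp.UnlockBits(bd);
                }
                return bmp.GetHbitmap();
            }
        }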

    Read the article

  • Send an XML message to Amazon SQS

    - by bartligthart
    I am a newbie to Amazon SQS and Ruby on Rails, and I am working on a project in which some XML messages must be sent to SQS. How do I do that? Right now I have this in the controller after the .save: def create @thing = Thing.new(params[:thing]) respond_to do |format| if @thing.save message = @thing.to_xml and in the model: inputqueue.send_message(message) Is this the way I can send an XML message to SQS, or is there a better one?
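
    A hedged sketch of the overall shape, assuming the right_aws gem (the queue name and credential constants are placeholders): the controller/model split above is fine, and send_message simply takes the XML string produced by to_xml.

        require 'right_aws'

        class Thing < ActiveRecord::Base
          after_create :push_to_sqs

          private

          def push_to_sqs
            sqs   = RightAws::SqsGen2.new(AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)
            queue = sqs.queue('things-input-queue')   # created on first use if missing
            queue.send_message(to_xml)                # an SQS message body is just a string
          end
        end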

    Read the article

  • Keeping track of changes - Django

    - by RadiantHex
    Hi folks!! I have various models that I would like to keep track of and collect statistical data on. The problem is how to store the changes over time. I have thought of various alternatives: storing a log in a TextField and updating it every time the model is saved; alternatively, pickling a list and storing it in a TextField; or saving logs to the hard drive. What are your suggestions?
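
    A hedged sketch of a fourth option that usually ages better than a growing TextField or pickle blob: one row per change, written from a post_save signal. ChangeLogEntry, Widget and the field names are illustrative, not an existing app.

        from django.core import serializers
        from django.db import models
        from django.db.models.signals import post_save

        class ChangeLogEntry(models.Model):
            model_name = models.CharField(max_length=100)
            object_id = models.PositiveIntegerField()
            data = models.TextField()                       # JSON snapshot of the instance
            created = models.DateTimeField(auto_now_add=True)

        class Widget(models.Model):                         # stand-in for one of the tracked models
            name = models.CharField(max_length=50)

        def log_change(sender, instance, **kwargs):
            ChangeLogEntry.objects.create(
                model_name=sender.__name__,
                object_id=instance.pk,
                data=serializers.serialize('json', [instance]),
            )

        post_save.connect(log_change, sender=Widget)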

    Read the article

  • Is there a future for Powerpoint VBA/VSTO?

    - by Sam Russo
    Does anyone know what the future holds for VBA/VSTO programming in PowerPoint? I've been working on an Office automation project and find it frustrating to work with PowerPoint in particular, since its VBA support seems to be one level below that found in Excel or Word. It feels like MS is trying to phase out support for VBA in PPT, since they dropped macro recording in version 2007 and the object model lacks support for some key features.

    Read the article

< Previous Page | Next Page >