Search Results

Search found 10177 results on 408 pages for 'thumbs db'.

Page 339/408

  • JSF ISO-8859-2 charset

    - by Vladimir
    Hi! I have a problem with setting the proper charset on my JSF pages. I use a MySQL db with latin2 (ISO-8859-2 charset) and latin2_croatian_ci collation, but I have problems with setting values on backing managed bean properties. The page directive at the top of my page is: <%@ page language="java" pageEncoding="ISO-8859-2" contentType="text/html; charset=ISO-8859-2" %> In the head I included: <meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-2"> And my form tag is: <h:form id="entityDetails" acceptcharset="ISO-8859-2"> I've created and registered a Filter in web.xml with the following doFilter method implementation: public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain) throws IOException, ServletException { request.setCharacterEncoding("ISO-8859-2"); response.setCharacterEncoding("ISO-8859-2"); chain.doFilter(request, response); } But when, for example, I set a managed bean property through inputText, all special (unicode) characters are replaced with the '?' character. I'm out of ideas on how to get the pages to handle this charset properly. Any suggestions? Thanks in advance.
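    A detail that often decides whether such a filter works is its registration: setCharacterEncoding only has an effect if it runs before any request parameter is read, so the filter should be mapped to every URL and declared ahead of any other filter. A minimal web.xml sketch, assuming the filter class is named com.example.CharsetFilter (the real class name is not shown in the question):

      <filter>
          <filter-name>charsetFilter</filter-name>
          <filter-class>com.example.CharsetFilter</filter-class>
      </filter>
      <!-- map it to every request, and list this filter-mapping first -->
      <filter-mapping>
          <filter-name>charsetFilter</filter-name>
          <url-pattern>/*</url-pattern>
      </filter-mapping>

    If the filter already runs first, the database connection is the next suspect: with MySQL the JDBC URL usually needs the encoding spelled out (for example ?useUnicode=true&characterEncoding=ISO-8859-2), otherwise values can be mangled between the driver and the latin2 tables even when the pages themselves are correct.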

    Read the article

  • How to convert an InputStream to a DataHandler?

    - by pcorey
    I'm working on a java web application in which files will be stored in a database. Originally we retrieved files already in the DB by simply calling getBytes on our result set: byte[] bytes = resultSet.getBytes(1); ... This byte array was then converted into a DataHandler using the obvious constructor: dataHandler=new DataHandler(bytes,"application/octet-stream"); This worked great until we started trying to store and retrieve larger files. Dumping the entire file contents into a byte array and then building a DataHandler out of that simply requires too much memory. My immediate idea is to retrieve a stream of the data in the database with getBinaryStream and somehow convert that InputStream into a DataHandler in a memory-efficient way. Unfortunately it doesn't seem like there's a direct way to convert an InputStream into a DataHandler. Another idea I've been playing with is reading chunks of data from the InputStream and writing them to the OutputStream of the DataHandler. But... I can't find a way to create an "empty" DataHandler that returns a non-null OutputStream when I call getOutputStream... Has anyone done this? I'd appreciate any help you can give me or leads in the right direction.
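    There is no DataHandler constructor that takes an InputStream directly, but DataHandler does accept anything implementing javax.activation.DataSource, so a thin wrapper around the JDBC stream avoids materializing the byte array. A sketch, with the caveat that the stream can only be read once and the underlying ResultSet/connection must stay open while the handler is being consumed:

      import javax.activation.DataHandler;
      import javax.activation.DataSource;
      import java.io.IOException;
      import java.io.InputStream;
      import java.io.OutputStream;

      // Minimal read-only DataSource that hands an existing InputStream to a DataHandler.
      public class InputStreamDataSource implements DataSource {
          private final InputStream stream;
          private final String contentType;
          private final String name;

          public InputStreamDataSource(InputStream stream, String contentType, String name) {
              this.stream = stream;
              this.contentType = contentType;
              this.name = name;
          }

          public InputStream getInputStream() throws IOException {
              return stream; // a plain JDBC stream can only be consumed once
          }

          public OutputStream getOutputStream() throws IOException {
              throw new IOException("read-only data source");
          }

          public String getContentType() { return contentType; }

          public String getName() { return name; }
      }

      // Usage, assuming the column is read with getBinaryStream as described above:
      // DataHandler dataHandler = new DataHandler(
      //         new InputStreamDataSource(resultSet.getBinaryStream(1),
      //                                   "application/octet-stream", "file"));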

    Read the article

  • Rails 3 memory issue

    - by Erik
    Hello! I'm developing a new site based on Ruby on Rails 3 beta. I knew this might be a bad idea considering it's just a beta, but I still thought it might work. Now, though, I'm having HUGE problems with Rails consuming huge amounts of memory. For my application today it consumes about 10 MB per request, and it doesn't seem to release it either. So I thought this might be because of bloat in my application, and thus I created a test app just to compare. For my test app I just generated a model with a scaffold and then created about 20 records on this model. I then went to the index page and hit refresh, and I could immediately see memory taking off! Less than my app, but still about 1-3 MB per request. I'm working on OS X Leopard, with Ruby 1.8.7, Rails 3.0.0.beta and a SQLite db for development. Does anyone recognize my problem? I would really appreciate some help here. :/ Thanks!

    Read the article

  • Pseudo-random numbers in PHP

    - by pg
    I have a function that outputs items in a different order depending on a random number. For example, half of the time Popeye's and its logo will be #1 on the list and Taco Bell and its logo will be #2, and half the time it will be the other way around. The problem is that when a user reloads or comes back to the page, the order is re-randomized. $range here is the number of items in the db, so it's using a random number between 1 and $range. $random = mt_rand(1,$range); for ($i = 0 ; $i < count($variants); $i++) { $random -= $variants[$i]['weight']; if ($random <= 0) { $chosenoffers[$tag] = $variants[$i]; break; } } I went to the beginning of the session and set this: if (!isset($_SESSION['rannum'])){ $_SESSION['rannum']=rand(1,100); } With the idea that I could replace the mt_rand in the function with some sort of pseudo-random generator that uses the same 1-100 random number as a seed throughout the session. That way I won't have to rewrite all the code that was already written. Am I barking up the wrong tree or is this a good idea?
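    Seeding the generator from a value kept in the session is exactly what mt_srand is for, and it leaves the existing mt_rand calls untouched. A minimal sketch of the idea (the session key follows the question; the rest is illustrative):

      <?php
      session_start();

      // One seed per visitor, created on the first request of the session.
      if (!isset($_SESSION['rannum'])) {
          $_SESSION['rannum'] = mt_rand(1, 100);
      }

      // Re-seed on every request: the sequence of mt_rand() calls that follows
      // is now the same on every reload for this visitor.
      mt_srand($_SESSION['rannum']);

      $random = mt_rand(1, $range); // stable per session, different per visitor

    An even simpler alternative is to store the chosen ordering (or the winning $random value) itself in the session the first time and reuse it afterwards, which avoids depending on the generator replaying an identical sequence.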

    Read the article

  • MySQL PowerDNS issue when adding a new CNAME record

    - by Roland
    I'm trying to add a new CNAME record for PowerDNS to the MySQL database, but my website does not show up. When I add it via Zone Admin it works, but as soon as I add the record as below it just does not work. Am I doing something wrong here? I checked that my record looks exactly the same in the DB as the record added with PowerDNS, and it does. $type = 'CNAME'; //Adding the subdomain to the DNS database $sql = "insert into records " . "(domain_id, name,type,content,ttl,prio,change_date) values("; $sql .= $domain_id . ",'"; $sql .= trim($subdomain).".". trim($domain) . "','"; $sql .= trim($type) . "','"; $sql .= trim($domain) . "',"; $sql .= "3600,0,'".time()."')";
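    One thing worth ruling out is a quoting or type problem introduced by the hand-built SQL (for instance, change_date ends up quoted as a string here). A prepared statement sidesteps escaping entirely; this sketch assumes a PDO handle $pdo pointing at the PowerDNS database, which the question does not show:

      $sql = "INSERT INTO records (domain_id, name, type, content, ttl, prio, change_date)
              VALUES (:domain_id, :name, :type, :content, 3600, 0, :change_date)";
      $stmt = $pdo->prepare($sql);
      $stmt->execute(array(
          ':domain_id'   => $domain_id,
          ':name'        => trim($subdomain) . '.' . trim($domain),
          ':type'        => 'CNAME',
          ':content'     => trim($domain),   // CNAME target
          ':change_date' => time(),
      ));

    If the inserted row really is byte-for-byte identical to one Zone Admin creates, the usual remaining suspect is PowerDNS's own caching: the packet and query caches can keep answering with the old (empty) result for a while after the insert.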

    Read the article

  • Limit foreign key choices in select in an inline form in admin

    - by mightyhal
    Edited :-) Hopefully a bit clearer now. The logic of the model is: A Building has many Rooms A Room may be inside another Room (a closet, for instance--ForeignKey on 'self') A Room can only be inside another Room in the same building (this is the tricky part) Here's the code I have: #spaces/models.py from django.db import models class Building(models.Model): name=models.CharField(max_length=32) def __unicode__(self): return self.name class Room(models.Model): number=models.CharField(max_length=8) building=models.ForeignKey(Building) inside_room=models.ForeignKey('self',blank=True,null=True) def __unicode__(self): return self.number and: #spaces/admin.py from ex.spaces.models import Building, Room from django.contrib import admin class RoomAdmin(admin.ModelAdmin): pass class RoomInline(admin.TabularInline): model = Room extra = 2 class BuildingAdmin(admin.ModelAdmin): inlines=[RoomInline] admin.site.register(Building, BuildingAdmin) admin.site.register(Room) The inline will display only rooms in the current building (which is what I want). The problem, though, is that for the inside_room drop-down, it displays all of the rooms in the Rooms table (including those in other buildings). In the inline of rooms, I need to limit the inside_room choices to only rooms which are in the current building being displayed by the main form. I can't figure out a way to do it with a limit_choices_to in the model, nor can I figure out how exactly to override the admin's inline formset properly (I feel like I should somehow create a custom inline form, pass the building_id of the main form to the custom inline, then limit the queryset for the field's choices based on that--but I just can't wrap my head around how to do it). Maybe this is too complex for the admin site, but it seems like something that would be generally useful... Thanks again for your help!
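    One workable pattern, sketched below on the assumption that rooms are edited inline on the Building admin page: stash the Building being edited on the request in the ModelAdmin, then override formfield_for_foreignkey in the inline to narrow the inside_room queryset. The attribute name _current_building is invented for the example:

      #spaces/admin.py (sketch)
      from ex.spaces.models import Building, Room
      from django.contrib import admin

      class RoomInline(admin.TabularInline):
          model = Room
          extra = 2

          def formfield_for_foreignkey(self, db_field, request=None, **kwargs):
              if db_field.name == 'inside_room':
                  building = getattr(request, '_current_building', None)
                  if building is not None:
                      # only rooms of the building currently being edited
                      kwargs['queryset'] = Room.objects.filter(building=building)
                  else:
                      kwargs['queryset'] = Room.objects.none()  # unsaved building: no choices yet
              return super(RoomInline, self).formfield_for_foreignkey(db_field, request, **kwargs)

      class BuildingAdmin(admin.ModelAdmin):
          inlines = [RoomInline]

          def get_form(self, request, obj=None, **kwargs):
              # remember which Building is open so the inline above can see it
              request._current_building = obj
              return super(BuildingAdmin, self).get_form(request, obj, **kwargs)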

    Read the article

  • Specifying ASP.NET MVC attributes for auto-generated data models

    - by Lyubomyr Shaydariv
    Hello to everyone. I'm very new to ASP.NET MVC (as well as ASP.NET in general) and am trying to gain some knowledge of this technology, so I'm sorry if I ask some trivial questions. I have installed ASP.NET MVC 3 RC1 and I'm trying to do the following. Let's say I have a model that's completely auto-generated from a table using the "LINQ to SQL Classes" template in VS2010. The template generates 3 files (two .cs files and one .layout file), and the generated partial class is expected to be used as an MVC model. Let's also say a single DB column that's mapped into the model may look like this: [global::System.Data.Linq.Mapping.ColumnAttribute(Storage = "_Name", DbType = "VarChar(128)")] public string Name { get { return this._Name; } set { if ( (this._Name != value) ) { // ... generated stuff goes here } } } The ASP.NET MVC engine also provides a beautiful declarative way to specify some additional stuff, like RequiredAttribute, DisplayNameAttribute and other nice attributes. But since the mapped model is a purely auto-generated model, I've realized that I should not change the model manually and specify the fields like: [Required] [DisplayName("Project name")] [StringLength(128)] [global::System.Data.Linq.Mapping.ColumnAttribute(Storage = "_Name", DbType = "VarChar(128)")] public string Name { ... though this approach works perfectly... until I change the model in the DBML designer, which removes the ASP.NET MVC attributes automatically. So, how do I specify ASP.NET MVC attributes for the DBML models and their fields safely? Thanks in advance, and Merry Christmas.
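    The usual way around this is a metadata "buddy class": because the generated type is partial, a second partial class file (one the designer never regenerates) can carry a MetadataType attribute pointing at a plain class that holds the validation and display attributes. A sketch, assuming the generated entity is called Project (the real class name is not shown in the question):

      using System.ComponentModel;
      using System.ComponentModel.DataAnnotations;

      // Lives in its own file, so regenerating the .dbml never touches it.
      [MetadataType(typeof(ProjectMetadata))]
      public partial class Project
      {
      }

      public class ProjectMetadata
      {
          // Property names must match the generated properties exactly.
          [Required]
          [DisplayName("Project name")]
          [StringLength(128)]
          public string Name { get; set; }
      }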

    Read the article

  • Excel 2003 VBA - Method to duplicate this code that selects and colors rows

    - by Justin
    so this is a fragment of a procedure that exports a dataset from access to excel Dim rs As Recordset Dim intMaxCol As Integer Dim intMaxRow As Integer Dim objxls As Excel.Application Dim objWkb As Excel.Workbook Dim objSht As Excel.Worksheet Set rs = CurrentDb.OpenRecordset("qryOutput", dbOpenSnapshot) intMaxCol = rs.Fields.Count If rs.RecordCount > 0 Then rs.MoveLast: rs.MoveFirst intMaxRow = rs.RecordCount Set objxls = New Excel.Application objxls.Visible = True With objxls Set objWkb = .Workbooks.Add Set objSht = objWkb.Worksheets(1) With objSht On Error Resume Next .Range(.Cells(1, 1), .Cells(intMaxRow, intMaxCol)).CopyFromRecordset rs .Name = conSHT_NAME .Cells.WrapText = False .Cells.EntireColumn.AutoFit .Cells.RowHeight = 17 .Cells.Select With Selection.Font .Name = "Calibri" .Size = 10 End With .Rows("1:1").Select With Selection .Insert Shift:=xlDown End With .Rows("1:1").Interior.ColorIndex = 15 .Rows("1:1").RowHeight = 30 .Rows("2:2").Select With Selection.Interior .ColorIndex = 40 .Pattern = xlSolid End With .Rows("4:4").Select With Selection.Interior .ColorIndex = 40 .Pattern = xlSolid End With .Rows("6:6").Select With Selection.Interior .ColorIndex = 40 .Pattern = xlSolid End With .Rows("1:1").Select With Selection.Borders(xlEdgeBottom) .LineStyle = xlContinuous .Weight = xlMedium .ColorIndex = xlAutomatic End With End With End With End If Set objSht = Nothing Set objWkb = Nothing Set objxls = Nothing Set rs = Nothing Set DB = Nothing End Sub see where I am looking at coloring the rows. I wanted to select and fill (with any color) every other row, kinda like some of those access reports. I can do it manually coding each and every row, but two problems: 1) its a pain 2) i don't know what the record count is before hand. How can I make the code more efficient in this respect while incorporating the recordcount to know how many rows to "loop through" EDIT: Another question I have is with the selection methods I am using in the module, is there a better excel syntax instead of these with selections.... .Cells.Select With Selection.Font .Name = "Calibri" .Size = 10 End With is the only way i figure out how to accomplish this piece, but literally every other time I run this code, it fails. It says there is no object and points to the .font ....every other time? is this because the code is poor, or that I am not closing the xls app in the code? if so how do i do that? Thanks as always!
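    For the alternating fill, a loop driven by the record count already held in intMaxRow avoids hard-coding .Rows("2:2"), .Rows("4:4") and so on; a sketch that reuses the question's objSht and intMaxRow:

      Dim lngRow As Long
      ' Shade every other data row; +1 because row 1 now holds the inserted header.
      For lngRow = 2 To intMaxRow + 1 Step 2
          With objSht.Rows(lngRow).Interior
              .ColorIndex = 40
              .Pattern = xlSolid
          End With
      Next lngRow

    On the second point, the Select/Selection pairs can be dropped by working on ranges directly (for example objSht.Cells.Font.Name = "Calibri"); Selection only refers to the active sheet of the active window, which is a common reason this style fails intermittently, especially if the Excel instance is left running between exports.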

    Read the article

  • EF 4.0 : Save Changes Retry Logic

    - by BGR
    Hi, I would like to implement an application wide retry system for all entity SaveChanges method calls. Technologies: Entity framework 4.0 .Net 4.0 namespace Sample.Data.Store.Entities { public partial class StoreDB { public override int SaveChanges(System.Data.Objects.SaveOptions options) { for (Int32 attempt = 1; ; ) { try { return base.SaveChanges(options); } catch (SqlException sqlException) { // Increment Trys attempt++; // Find Maximum Trys Int32 maxRetryCount = 5; // Throw Error if we have reach the maximum number of retries if (attempt == maxRetryCount) throw; // Determine if we should retry or abort. if (!RetryLitmus(sqlException)) throw; else Thread.Sleep(ConnectionRetryWaitSeconds(attempt)); } } } static Int32 ConnectionRetryWaitSeconds(Int32 attempt) { Int32 connectionRetryWaitSeconds = 2000; // Backoff Throttling connectionRetryWaitSeconds = connectionRetryWaitSeconds * (Int32)Math.Pow(2, attempt); return (connectionRetryWaitSeconds); } /// <summary> /// Determine from the exception if the execution /// of the connection should Be attempted again /// </summary> /// <param name="exception">Generic Exception</param> /// <returns>True if a a retry is needed, false if not</returns> static Boolean RetryLitmus(SqlException sqlException) { switch (sqlException.Number) { // The service has encountered an error // processing your request. Please try again. // Error code %d. case 40197: // The service is currently busy. Retry // the request after 10 seconds. Code: %d. case 40501: //A transport-level error has occurred when // receiving results from the server. (provider: // TCP Provider, error: 0 - An established connection // was aborted by the software in your host machine.) case 10053: return (true); } return (false); } } } The problem: How can I run the StoreDB.SaveChanges to retry on a new DB context after an error occured? Something simular to Detach/Attach might come in handy. Thanks in advance! Bart

    Read the article

  • Adding titles and descriptions to images in Refinerycms-Portfolio

    - by John Deely
    I'm using the Refinery content management system with the Portfolio plugin, see http://github.com/resolve/refinerycms I wanted to create a title and description for images uploaded to Refinery using Refinerycms-Portfolio. So far I have done the following: added the columns to the images table $script/generate migration AddTitleToImages title:string $script/generate migration AddBodyToImages body:text $rake db:migrate modified the field div in this file, * highlighted vendor/plugins/images/app/views/admin/images/_form.html.erb Use current image or, replace it with this one... ***** ***** ***** ***** "wymeditor", :rows = 7 % Added these lines to the main image partial in vendor/plugins/refinerycms-portolio/app/views/portfolio/ _main_image.html.erb <%= @image.body % This works in the back-end except for a few visual bugs. The problem is that when I click through the thumbnails in the front-end, the titles and descriptions keep stacking on top of the previous titles and descriptions. The main image changes fine, but instead of refreshing the title and description, it adds the new one above the previous ones. How can I stop this repetition so that only one title and description shows at a time? I'm new to Rails, I am using Rails 2.3.5, and I suspect this can be solved using JavaScript. Thanks in advance, John

    Read the article

  • Automation Error upon running VBA script in Excel

    - by brohjoe
    Hi guys, I'm getting an Automation error upon running VBA code in Excel 2007. I'm attempting to connect to a remote SQL Server DB and load data to from Excel to SQL Server. The error I get is, "Run-time error '-2147217843(80040e4d)': Automation error". I checked out the MSDN site and it suggested that this may be due to a bug associated with the sqloledb provider and one way to mitigate this is to use ODBC. Well I changed the connection string to reflect ODBC provider and associated parameters and I'm still getting the same error. Here is the code with ODBC as the provider: Dim cnt As ADODB.Connection Dim rst As ADODB.Recordset Dim stSQL As String Dim wbBook As Workbook Dim wsSheet As Worksheet Dim rnStart As Range Public Sub loadData() 'This was set up using Microsoft ActiveX Data Components version 6.0. 'Create ADODB connection object, open connection and construct the connection string object. Set cnt = New ADODB.Connection cnt.ConnectionString = _ "Driver={SQL Server}; Server=onlineSQLServer2010.foo.com; Database=fooDB Uid=logonalready;Pwd='helpmeOB1';" cnt.Open On Error GoTo ErrorHandler 'Open Excel and run query to export data to SQL Server. strSQL = "SELECT * INTO SalesOrders FROM OPENDATASOURCE('Microsoft.ACE.OLEDB.12.0', & _ "'Data Source=C:\Database.xlsx; Extended Properties=Excel 12.0')...[SalesOrders$]" cnt.Execute (strSQL) 'Error handling. ErrorExit: 'Reclaim memory from the connection objects Set rst = Nothing Set cnt = Nothing Exit Sub ErrorHandler: MsgBox Err.Description, vbCritical Resume ErrorExit 'clean up and reclaim memory resources. cnt.Close If CBool(cnt.State And adStateOpen) Then Set rst = Nothing Set cnt = Nothing End If End Sub
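    Error 80040e4d is typically an authentication failure, and one thing that stands out in the connection string is a missing separator: "Database=fooDB Uid=logonalready" runs two keywords together, so the driver may never see a user id at all. A hedged sketch of the same string with every key=value pair delimited (and without quoting the password):

      cnt.ConnectionString = _
          "Driver={SQL Server};" & _
          "Server=onlineSQLServer2010.foo.com;" & _
          "Database=fooDB;" & _
          "Uid=logonalready;" & _
          "Pwd=helpmeOB1;"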

    Read the article

  • ORM model and DAO in my particular case

    - by EugeneP
    I have the DB structure as follows: table STUDENT (say, id, surname, etc) table STUDENT_PROPERTIES (say, name_of_the_property:char, value_of_the_property:char, student_id:FK) table COURSE (id, name, statusofcourse_id) table STATUSOFCOURSE (id, name_of_status:char ('active','inactive','suspended' etc)) table STUDENT_COURSE (student_id,course_id,statusofcourse_id) Let's try to pick out the domain objects in my database: Student and Course are the main entities. A Student has a list of courses he attends, and he also has a list of properties; that is all for this student. Next, the Course entity. It may contain a list of students that attend it. But in fact, the whole structure looks like this: the starting point is Student; with its PK we can look up a list of his properties, then we look into STUDENT_COURSE and extract both the FK of the Course entity and also the Status of the combination. It would read like "Student named bla bla, with all his properties, attends math and the status of it is ACTIVE". Now, a quotation: 1) Each DAO instance is responsible for one primary domain object or entity. If a domain object has an independent lifecycle, it should have its own DAO. 2) The DAO is responsible for creations, reads (by primary key), updates, and deletions -- that is, CRUD -- on the domain object. Now, the first question is: what are the entities in my case? Student, Course, Student_Course, Status = all except for StudentProperties? Do I have to create a separate DAO for every object?
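    Read against that quotation, StudentProperties has no independent lifecycle (a property row exists only as part of its Student), so it would normally be loaded and saved through the Student's DAO rather than getting one of its own; Student and Course do warrant their own DAOs, and the STUDENT_COURSE row with its status can be exposed as part of the Student-to-Course association. A hedged Java sketch of that split (all names are illustrative):

      import java.util.List;

      // One DAO per entity with an independent lifecycle; properties travel with Student.
      public interface StudentDao {
          Student findById(long id);                        // loads STUDENT plus its STUDENT_PROPERTIES rows
          List<Enrollment> findEnrollments(long studentId); // STUDENT_COURSE rows with course and status
          void save(Student student);                       // persists the student and its properties together
          void delete(long id);
      }

      public interface CourseDao {
          Course findById(long id);
          void save(Course course);
          void delete(long id);
      }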

    Read the article

  • PHP Multiple Calls to Server Share Objects?

    - by user1513171
    I’m wondering this about PHP on Apache. Do multiple calls to the server from different users—could be sitting next to each other, in different states, different countries, etc…—share memory? For example, if I create a static variable in a PHP script and set it to 1 by default, then user1 comes in and it changes to 2, and then almost at the exactly same time, user2 comes in, does he see that static variable with a value of 1 or 2? An even better example is this class I have in PHP: class ApplicationRegistry { private static $instance; private static $PDO; private function __construct() { self::$PDO = $db = new \PDO('mysql:unix_socket=/........'); self::$PDO->setAttribute(\PDO::ATTR_ERRMODE, \PDO::ERRMODE_EXCEPTION); } static function instance() { if(!isset(self::$instance)) { self::$instance = new self(); } return self::$instance; } static function getDSN() { if(!isset(self::$PDO)) { self::instance(); return self::$PDO; } return self::$PDO; } } So this is a Singleton that has a static PDO instance. If user1 and user2 are hitting the server at the exact same time are they using different instances of PDO or are they using the same one? This is a confusing concept for me and I'm trying to think of how my application will scale.

    Read the article

  • Heroku taps push weirdness...

    - by holden
    I have the strangest experience using taps to move data between my machine and Heroku. It works fine except that it seems to lose 0s directly behind the decimal place for my geo coordinates. E.g. 50.0519322 for some reason gets set to 50.519322... no idea why. When I pull the data from the remote location, i.e. heroku db:pull... it works fine, all decimal places intact on my machine; however, when I push it back to the remote server it loses these zeros. Especially directly behind the decimal place, though I haven't noticed it elsewhere yet. At first I was storing the lat and lng as simply numeric but refined it to: change_column :places, :lat, :numeric, :precision => 15, :scale => 10 change_column :places, :lng, :numeric, :precision => 15, :scale => 10 With no result, any ideas what's going on? From the console on the remote server I get the lat as being: #<BigDecimal:2aebcc5967c0,'0.50519322E2',18(18)> and my machine as: #<BigDecimal:10232f7c8,'0.50519322E2',12(16)> The second one is also odd because it shows up as 50.0519322 when I edit it through my view, but when I do to_f via console it gives me 50.519322

    Read the article

  • Ruby / Rails - How to aggregate a Query Results in an Array?

    - by AnApprentice
    Hello, I have a large data set that I want to clean up for the user. The data set from the DB looks something like this: ID | project_id | thread_id | action_type |description 1 | 10 | 30 | comment | yada yada yada yada yada 1 | 10 | 30 | comment | xxx 1 | 10 | 30 | comment | yada 313133 1 | 10 | 33 | comment | fdsdfsdfsdfsdfs 1 | 10 | 33 | comment | yada yada yada yada yada 1 | 10 | | attachment | fddgaasddsadasdsadsa 1 | 10 | | attachment | xcvcvxcvxcvxxcvcvxxcv Right now, when I output the above in my view it's in the very same order as above; the problem is that it is very repetitive. For example, for project_id 10 & thread_id 30 you see: 10 - 30 - yada yada yada yada yada 10 - 30 - xxxxx 10 - 30 - yada yada yada yada yada What I would like to learn how to do in Ruby is somehow create an array and aggregate descriptions under a project_id and thread_id, so instead the output is: 10 - 30 - yada yada yada yada yada - xxxxx - yada yada yada yada yada Any advice on where to get started? This requirement is new for me, so I would appreciate your thoughts on what you're thinking the best way to solve this is. Hopefully this can be done in Ruby and not SQL, as the activity feed is likely going to grow in event types and complexity. Thanks
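    Enumerable#group_by does this kind of aggregation directly in Ruby; a small sketch assuming the rows arrive as an array of hashes shaped like the table above (the variable names are illustrative):

      # rows: the result set, e.g. an array of hashes with :project_id, :thread_id, :description
      grouped = rows.group_by { |row| [row[:project_id], row[:thread_id]] }

      grouped.each do |(project_id, thread_id), items|
        puts "#{project_id} - #{thread_id}"
        items.each { |item| puts "  - #{item[:description]}" }
      end

    Since the activity feed is expected to grow, keeping the grouping key as a plain array like this makes it easy to add action_type or other fields later without touching SQL.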

    Read the article

  • Data in two databases, eager spool resulting in query

    - by Valkyrie
    I have two databases in SQL2k5: one that holds a large amount of static data (SQL Database 1) (never updated but frequently inserted into) and one that holds relational data (SQL Database 2) related to the static data. They're separated mainly because of corporate guidelines and business requirements: assume for the following problem that combining them is not practical. There are places in SQLDB2 that PKs in SQLDB1 are referenced; triggers control the referential integrity, since cross-database relationships are troublesome in SQL Server. BUT, because of the large amount of data in SQLDB1, I'm getting eager spools on queries that join from the Id in SQLDB2 that references the data in SQLDB1. (With me so far? Maybe an example will help:) SELECT t.Id, t.Name, t2.Company FROM SQLDB1.table t INNER JOIN SQLDB2.table t2 ON t.Id = t2.FKId This query results in a eager spool that's 84% of the load of the query; the table in SQLDB1 has 35M rows, so it's completely choking this query. I can't create a view on the table in SQLDB1 and use that as my FK/index; it doesn't want me to create a constraint based on a view. Anyone have any idea how I can fix this huge bottleneck? (Short of putting the static data in the first db: believe me, I've argued that one until I'm blue in the face to no avail.) Thanks! valkyrie Edit: also can't create an indexed view because you can't put schemabinding on a view that references a table outside the database where the view resides. Dang it.

    Read the article

  • Using a .MDF SQL Server Database with ASP.NET Versus Using SQL Server

    - by Maxim Z.
    I'm currently writing a website in ASP.NET MVC, and my database (which doesn't have any data in it yet, it only has the correct tables) uses SQL Server 2008, which I have installed on my development machine. I connect to the database out of my application by using the Server Explorer, followed by LINQ to SQL mapping. Once I finish developing the site, I will move it over to my hosting service, which is a virtual hosting plan. I'm concerned about whether using the SQL Server setup that is currently working on my development machine will be hard to do on the production server, as I'll have to import all the database tables through the hosting control panel. I've noticed that it is possible to create a SQL Server database from inside Visual Studio. It is then stored in the App_Data directory. My questions are the following: Does it make sense to move my SQL Server DB out of SQL Server and into the App_Data directory as an .mdf file? If so, how can I move it? I believe this is called the Detach command, is it not? Are there any performance/security issues that can occur with a .mdf file like this? Would my intended setup work OK with a typical virtual hosting plan? I'm hoping that the .mdf database won't count against the limited number of SQL Server databases that can be created with my plan. I hope this question isn't too broad. Thanks in advance! Note: I'm just starting out with ASP.NET MVC and all this, so I might be completely misunderstanding how this is supposed to work.
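    For the second question, an App_Data database is normally referenced with an AttachDbFilename-style connection string rather than by copying files around at runtime; a sketch of the web.config entry (the names are placeholders, and it assumes the server runs SQL Server Express with user instances enabled):

      <connectionStrings>
        <add name="SiteDb"
             connectionString="Data Source=.\SQLEXPRESS;AttachDbFilename=|DataDirectory|\Site.mdf;Integrated Security=True;User Instance=True"
             providerName="System.Data.SqlClient" />
      </connectionStrings>

    Whether that .mdf counts against the plan's database quota, and whether user instances are allowed at all, only the host can say; many shared hosts disable them, in which case importing the schema into a regular hosted SQL Server database is the safer route.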

    Read the article

  • Question about MySQLdb, OS X 10.5, and authentication

    - by timpone
    I'm a noob at Python and have been having problems with MySQLdb and OS X Leopard 10.5. I have a PHP app that is doing db access just fine with PDO, but I also want to access the database with Python. When I use the same credentials with MySQLdb as with PHP, I get the following error: File "build/bdist.macosx-10.5-i386/egg/MySQLdb/connections.py", line 188, in __init__ _mysql_exceptions.OperationalError: (1045, "Access denied for user 'arc_db'@'localhost' (using password: YES)") The authentication piece works fine on my Ubuntu server (installed via apt-get), implying that it is something specific to my OS X MySQLdb install. Looking at some postings, I thought it would be my local build of MySQLdb, which seems to be problematic with OS X. But I am able to import fine: Python 2.5.1 (r251:54863, Feb 6 2009, 19:02:12) [GCC 4.0.1 (Apple Inc. build 5465)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import MySQLdb >>> Also, wanting to create a positive test, I am able to access and return results from a database titled test_something (which presumably bypasses MySQL's authentication - not sure exactly how though). Trying to figure out a little more about what is going on, I turned on logging for MySQL and got the following (I added my own comments): 100609 19:09:45 3 Connect Access denied for user 'arc_db'@'localhost' (using password: YES) //did not work 100609 19:10:02 4 Connect arc_db@localhost on arc_development //did work I'm not really sure what the 3 or 4 means, but presumably a success or failure. So, I guess, what would be the next step? Am I making some obvious stupid Python mistake (very likely)? Is there a better way for me to prove that this should / can be working? Is there any way to determine exactly what MySQLdb is sending in its authentication message to MySQL? Thanks
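    Since the server is clearly being reached (it answers with an access-denied for 'arc_db'@'localhost'), one way to narrow things down is to spell out every connection parameter instead of relying on defaults, and to try TCP explicitly: with host 127.0.0.1 the grant that gets checked is the one for the client's IP rather than for 'localhost', and those can differ. A sketch (the password is a placeholder):

      import MySQLdb

      conn = MySQLdb.connect(
          host='127.0.0.1',      # force TCP instead of the default Unix socket
          port=3306,
          user='arc_db',         # exactly the credentials the PHP/PDO app uses
          passwd='secret',       # placeholder
          db='arc_development',
      )
      cursor = conn.cursor()
      cursor.execute('SELECT 1')
      print cursor.fetchone()

    If the TCP connection succeeds, the difference lies in how the two clients connect (socket vs. TCP) and which grant row applies, not in the MySQLdb build itself.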

    Read the article

  • GeoDjango: Exceptions for basic geographic queries

    - by omat
    I am having problems with geographic queries with GeoDjango running on SpatiaLite on my development environment. from django.contrib.gis.db import models class Test(models.Model): poly = models.PolygonField() point = models.PointField() geom = models.GeometryField() objects = models.GeoManager() Testing via shell: >>> from geotest.models import Test >>> from django.contrib.gis.geos import GEOSGeometry >>> >>> point = GEOSGeometry("POINT(0 0)") >>> point <Point object at 0x105743490> >>> poly = GEOSGeometry("POLYGON((0 0, 0 1, 1 1, 1 0, 0 0))") >>> poly <Polygon object at 0x105743370> >>> With these definitions, lets try some basic geographic queries. First contains: >>> Test.objects.filter(point__within=poly) Assertion failed: (0), function appendGeometryTaggedText, file WKTWriter.cpp, line 228. Abort trap And the django shell dies. And with within: >>> Test.objects.filter(poly__contains=point) GEOS_ERROR: Geometry must be a Point or LineString Traceback (most recent call last): ... GEOSException: Error encountered checking Coordinate Sequence returned from GEOS C function "GEOSGeom_getCoordSeq_r". >>> Other combinations also cause a variety of exceptions. I must be missing something obvious as these are very basic. Any ideas?

    Read the article

  • 'Invalid column name [ColumnName]' on a nested LINQ query

    - by Joe
    I've got the following query: ATable .GroupBy(x=> new {FieldA = x.FieldAID, FieldB = x.FieldBID, FieldC = x.FieldCID}) .Select(x=>new {FieldA = x.Key.FieldA, ..., last_seen = x.OrderByDescending(y=>y.Timestamp).FirstOrDefault().Timestamp}) results in: SqlException: Invalid column name 'FieldAID' x 5 SqlException: Invalid column name 'FieldBID' x 5 SqlException: Invalid column name 'FieldCID' x 1 I've worked out it has to do with the last query to Timestamp because this works: ATable .GroupBy(x=> new {FieldA = x.FieldAID, FieldB = x.FieldBID, FieldC = x.FieldCID}) .Select(x=>new {FieldA = x.Key.FieldA, ..., last_seen = x.OrderByDescending(y=>y.Timestamp).FirstOrDefault()}) The query has been simplified. The purpose is to group by a set of variables and then show the last time this grouping occured in the db. I'm using Linqpad 4 to generate these results so the Timestamp gives me a string whereas FirstOrDefault gives me the whole object which isn't ideal. Update On further testing I've noticed that the number and type of SQLException is related to the class created in the groupby clause. So, ATable .GroupBy(x=> new {FieldA = x.FieldAID}) .Select(x=>new {FieldA = x.Key.FieldA, last_seen = x.OrderByDescending(y=>y.Timestamp).FirstOrDefault()}) results in SqlException: Invalid column name 'FieldAID' x 5

    Read the article

  • Big sinatra problems

    - by Joel M.
    Hi, So I'm having huge trouble with sinatra. Here's what I have: require 'dm-core' DataMapper.setup(:default, ENV['DATABASE_URL'] || 'sqlite3://my.db') class Something include DataMapper::Resource property :id, Serial property :thing, Text property :run_in, Integer property :added_at, DateTime property :to, String def schedule cronify(self.thing+" to "+self.to, "http://url"+self.id.to_s, self.run_in) end def notify text(self.thing, self.to) end end Something.auto_upgrade! The cronify method works. I tested it in irb. Also, the schedule instance method works, I tested it in the console. However, it doesn't work in the route, even though it works in the console. post '/add' do @something = Something.create(blah) #this works fine @something.schedule #this works fine in the console; not in the route. end I've tried everything, from @something.create(blah).schedule (which also works fine in the console, but not in the route), to defining the method cronify inside the sinatra helpers, and even calling cronify directly on the route. Nothing works on the route. What am I doing wrong?

    Read the article

  • Data sharing amongst JPA Entities

    - by Nick
    Setup: I have a simple web app that has a handful of forms, each on a separate page. These forms represent patient data. There is a one-to-one relationship between a patient and all these forms/entities. Each form maps directly to a db table and a JPA entity; maybe not the best architecture, but it works and is simple. Question: If form/entity A and form/entity B share a common chunk of data (one or more fields), what is the best way to handle that in JPA? I.e., if the data gets inserted via form A, I need it to show up in form B as existing data and vice versa. In other words, it's logical for both entities to contain that data. I believe I will have to move the common data into its own entity and define the relationships that way, but I have tried many different ways and none gets me all the way, at least with basic JPA. Can this be done through pure JPA relationships or will I have to write a bunch of code to make this happen manually? Not looking for code specifically, just the correct way to model this data. Thanks.
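    Modeling the shared chunk as its own entity is the usual answer, and plain JPA relationships cover it: both form entities hold a @OneToOne to the same shared row, so whichever form saves the data, the other one reads it. A hedged sketch with invented names (SharedInfo and the bloodType field are purely illustrative; each entity would live in its own file):

      import javax.persistence.CascadeType;
      import javax.persistence.Entity;
      import javax.persistence.GeneratedValue;
      import javax.persistence.Id;
      import javax.persistence.JoinColumn;
      import javax.persistence.OneToOne;

      @Entity
      public class SharedInfo {
          @Id @GeneratedValue
          private Long id;
          private String bloodType; // example shared field
          // getters and setters omitted for brevity
      }

      @Entity
      public class FormA {
          @Id @GeneratedValue
          private Long id;

          @OneToOne(cascade = CascadeType.ALL)
          @JoinColumn(name = "shared_info_id")
          private SharedInfo sharedInfo;
      }

      @Entity
      public class FormB {
          @Id @GeneratedValue
          private Long id;

          @OneToOne(cascade = CascadeType.ALL)
          @JoinColumn(name = "shared_info_id")
          private SharedInfo sharedInfo;
      }

    The piece JPA will not do on its own is deciding that two forms belong to the same SharedInfo row; typically the service code looks up the patient's existing SharedInfo (or creates one) and attaches it before saving a new form.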

    Read the article

  • FluentNHibernate SQLite configuration exception - after switching to .net4

    - by stiank81
    I get an exception thrown when trying to use Fluent to configure my NHibernate connection to SQLite. The code I use to configure is as follows: var cfg = Fluently.Configure(). Database(SQLiteConfiguration.Standard.ShowSql().UsingFile("MyDb.db")). Mappings(m => m.FluentMappings.AddFromAssemblyOf<MappingsPersistenceModel>()); _sessionFactory = cfg.BuildSessionFactory(); A HibernateException is thrown when BuildSessionFactory() is called, saying: Could not create the driver from NHibernate.Driver.SQLite20Driver, NHibernate, Version=2.1.2.4000, Culture=neutral, PublicKeyToken=aa95f207798dfdb4. It has an InnerException: Exception has been thrown by the target of an invocation. Which again has an InnerException: The IDbCommand and IDbConnection implementation in the assembly System.Data.SQLite could not be found. Ensure that the assembly System.Data.SQLite is located in the application directory or in the Global Assembly Cache. If the assembly is in the GAC, use element in the application configuration file to specify the full name of the assembly. Now - to me it sounds like it doesn't find System.Data.SQLite.dll, but I can't understand this. Everywhere this is referenced I have "Copy Local", and I have verified that it is in every build folder for projects using SQLite. I have also copied it manually to every Debug folder of the solution - without luck. What can be causing this? Any ideas? My suspicion is that it is related to .Net4 somehow. The reason is that it worked just fine when I used .Net3.5, and then I changed to .Net4, and the problem started. You can also check out this other question for a more general approach towards Fluent-.Net4 compatibility.
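    One .NET 4-specific cause fits these symptoms: the System.Data.SQLite builds of that era are mixed-mode assemblies compiled against the 2.0 runtime, and a .NET 4 process refuses to load them unless the legacy activation policy is enabled, which surfaces exactly as "The IDbCommand and IDbConnection implementation ... could not be found" even though the DLL sits next to the executable. A sketch of the app.config entry to try, assuming the older 2.x-targeted SQLite assembly is still the one referenced:

      <?xml version="1.0"?>
      <configuration>
        <startup useLegacyV2RuntimeActivationPolicy="true">
          <supportedRuntime version="v4.0" />
        </startup>
      </configuration>

    The alternative is to switch to a System.Data.SQLite build compiled for .NET 4, making sure its bitness (x86/x64) matches the process, since the native SQLite engine is embedded in the assembly.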

    Read the article

  • Identifying and Resolving Oracle ITL Deadlock

    - by Allan
    I have an Oracle DB package that is routinely causing what I believe is an ITL (Interested Transaction List) deadlock. The relevant portion of a trace file is below. Deadlock graph: ---------Blocker(s)-------- ---------Waiter(s)--------- Resource Name process session holds waits process session holds waits TM-0000cb52-00000000 22 131 S 23 143 SS TM-0000ceec-00000000 23 143 SX 32 138 SX SSX TM-0000cb52-00000000 30 138 SX 22 131 S session 131: DID 0001-0016-00000D1C session 143: DID 0001-0017-000055D5 session 143: DID 0001-0017-000055D5 session 138: DID 0001-001E-000067A0 session 138: DID 0001-001E-000067A0 session 131: DID 0001-0016-00000D1C Rows waited on: Session 143: no row Session 138: no row Session 131: no row There are no bit-map indexes on this table, so that's not the cause. As far as I can tell, the lack of "Rows waited on" plus the "S" in the Waiter waits column likely indicates that this is an ITL deadlock. Also, the table is written to quite often (roughly 8 inserts or updates concurrently, as often as 240 times a minute), so an ITL deadlock seems like a strong possibility. I've increased the INITRANS parameter of the table and its indexes to 100 and increased the PCT_FREE on the table from 10 to 20 (then rebuilt the indexes), but the deadlocks are still occurring. The deadlock seems to happen most often during an update, but that could just be a coincidence, as I've only traced it a couple of times. My questions are two-fold: 1) Is this actually an ITL deadlock? 2) If it is an ITL deadlock, what else can be done to avoid it?
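    One detail worth checking if this is an ITL deadlock: raising INITRANS on an existing segment only affects blocks formatted after the change, so the table's existing blocks keep their old ITL allocation until the segment itself is rebuilt. A hedged sketch (object names are placeholders; MOVE takes the table offline briefly and invalidates its indexes, hence the rebuilds):

      ALTER TABLE my_table MOVE INITRANS 100;
      ALTER INDEX my_table_pk REBUILD INITRANS 100;
      -- repeat the REBUILD for every index on the table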

    Read the article

  • How do people know so much about programming?

    - by Luciano
    I see people in these forums with a lot of points, so I assume they know about a lot of different programming stuff. When I was young I knew about BASIC (Commodore) and Turbo Pascal (PC). Then in college I learnt about C, memory management, the x86 instruction set, loop invariants, graphs, db query optimization, oop, functional, lambda calculus, prolog, concurrency, polymorphism, Newton's method, simplex, backtracking, dynamic programming, heuristics, np completeness, LR, LALR, neural networks, static & dynamic typing, Turing, Gödel, and more in between. Then in industry I started with Java several years ago and learnt about it and its variety of frameworks, and also design patterns, architecture patterns, web development, server development, mobile development, tdd, bdd, uml, use cases, bug trackers, process management, people management if you are a tech lead, profiling, security concerns, etc. I started to forget what I learnt in college... And then there is the stuff I don't know yet, like Python, .NET, Perl, and JVM stuff like Groovy or Scala... Of course Google is a must for rapid documentation access, to know if a problem has been solved already and how, and to keep informed about new stuff through blogs and places like this one. It's just too much, or I just have a bad memory... how do you guys manage it?

    Read the article
