Search Results

Search found 7897 results on 316 pages for 'generate'.

Page 256/316

  • django: control json serialization

    - by abolotnov
    Is there a way to control JSON serialization in Django? The simple code below will return a serialized object in JSON: co = Collection.objects.all() c = serializers.serialize('json',co) The JSON will look similar to this: [ { "pk": 1, "model": "picviewer.collection", "fields": { "urlName": "architecture", "name": "\u0413\u043e\u0440\u043e\u0434 \u0438 \u0430\u0440\u0445\u0438\u0442\u0435\u043a\u0442\u0443\u0440\u0430", "sortOrder": 0 } }, { "pk": 2, "model": "picviewer.collection", "fields": { "urlName": "nature", "name": "\u041f\u0440\u0438\u0440\u043e\u0434\u0430", "sortOrder": 1 } }, { "pk": 3, "model": "picviewer.collection", "fields": { "urlName": "objects", "name": "\u041e\u0431\u044a\u0435\u043a\u0442\u044b \u0438 \u043d\u0430\u0442\u044e\u0440\u043c\u043e\u0440\u0442", "sortOrder": 2 } } ] You can see it's serializing it in a way that lets you re-create the whole model, should you want to do this at some point - fair enough, but not very handy for simple JS AJAX in my case: I want to keep the traffic to a minimum and make the whole thing a little clearer. What I did was create a view that passes the object to a .json template, and the template does something like this to generate "nicer" JSON output: [ {% if collections %} {% for c in collections %} {"id": {{c.id}},"sortOrder": {{c.sortOrder}},"name": "{{c.name}}","urlName": "{{c.urlName}}"}{% if not forloop.last %},{% endif %} {% endfor %} {% endif %} ] This does work and the output is much (?) nicer: [ { "id": 1, "sortOrder": 0, "name": "????? ? ???????????", "urlName": "architecture" }, { "id": 2, "sortOrder": 1, "name": "???????", "urlName": "nature" }, { "id": 3, "sortOrder": 2, "name": "??????? ? ?????????", "urlName": "objects" } ] However, I'm bothered by the fact that my solution uses templates (an extra step in processing and a possible performance impact) and it will take manual work to maintain should I update the model, for example. I'm thinking JSON generation should be part of the model (correct me if I'm wrong) and done with either the native Python json module or the Django implementation, but I can't figure out how to make it strip the bits that I don't want. One more thing - even when I restrict it to a set of fields to serialize, it will always keep the id outside the element container and present it as "pk" instead.
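
    If the goal is just a compact JSON payload for AJAX, one option is to skip the serializers framework and dump only the wanted fields yourself. The sketch below is an illustration under assumptions (it presumes the Collection model from the question, an id primary key, and a Django version where HttpResponse accepts content_type), not the poster's actual code:

      import json
      from django.http import HttpResponse
      from picviewer.models import Collection

      def collections_json(request):
          # values() already returns plain dicts with only the listed fields,
          # so the "pk"/"model" wrappers never appear in the output
          data = list(Collection.objects.values('id', 'sortOrder', 'name', 'urlName'))
          return HttpResponse(json.dumps(data), content_type='application/json')

    json.dumps will escape the Cyrillic names the same way the built-in serializer does; pass ensure_ascii=False if raw UTF-8 is preferred.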

    Read the article

  • convert list of relative widths to pixel widths

    - by mkoryak
    This is a code review question more than anything. I have the following problem: given a list of relative widths (no unit whatsoever, just all relative to each other), generate a list of pixel widths so that these pixel widths have the same proportions as the original list. Input: list of proportions, total pixel width. Output: list of pixel widths, where each width is an int, and the sum of these equals the total width. Code: var sizes = "1,2,3,5,7,10".split(","); //initial proportions var totalWidth = 1024; // total pixel width var widths = []; var sizesTotal = 0; for (var i = 0; i < sizes.length; i++) { sizesTotal += parseInt(sizes[i], 10); } if(sizesTotal != 100){ var totalLeft = 100; for (var i = 0; i < sizes.length; i++) { sizes[i] = Math.floor(parseInt(sizes[i], 10) / sizesTotal * 100); totalLeft -= sizes[i]; } sizes[sizes.length - 1] = totalLeft; } totalLeft = totalWidth; for (var i = 0; i < sizes.length; i++) { widths[i] = Math.floor(totalWidth / 100 * sizes[i]); totalLeft -= widths[i]; } widths[sizes.length - 1] = totalLeft; //return widths which contains a list of INT pixel sizes
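
    For comparison, here is a minimal largest-remainder sketch of the same conversion (written in Python rather than the question's JavaScript, purely to illustrate the algorithm): floor every exact share, then hand the leftover pixels to the entries with the largest fractional parts so the result always sums to the total.

      def to_pixel_widths(proportions, total_px):
          total = sum(proportions)
          exact = [p * total_px / total for p in proportions]
          widths = [int(x) for x in exact]              # floor of each exact share
          leftover = total_px - sum(widths)             # at most len(proportions) - 1 pixels
          by_remainder = sorted(range(len(exact)),
                                key=lambda i: exact[i] - widths[i], reverse=True)
          for i in by_remainder[:leftover]:
              widths[i] += 1
          return widths

      print(to_pixel_widths([1, 2, 3, 5, 7, 10], 1024))  # sums to exactly 1024

    This avoids the intermediate rounding to whole percentages, which is where most of the drift in the original code comes from.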

    Read the article

  • PostgreSQL, Foreign Keys, Insert speed & Django

    - by Miles
    A few days ago, I ran into an unexpected performance problem with a pretty standard Django setup. For an upcoming feature, we have to regenerate a table hourly, containing about 100k rows of data, 9M on the disk, 10M indexes according to pgAdmin. The problem is that inserting them by whatever method literally takes ages, up to 3 minutes of 100% disk busy time. That's not something you want on a production site. It doesn't matter if the inserts were in a transaction, issued via plain insert, multi-row insert, COPY FROM or even INSERT INTO t1 SELECT * FROM t2. After noticing this isn't Django's fault, I followed a trial and error route, and hey, the problem disappeared after dropping all foreign keys! Instead of 3 minutes, the INSERT INTO SELECT FROM took less than a second to execute, which isn't too surprising for a table <= 20M on the disk. What is weird is that PostgreSQL manages to slow down inserts by 180x just by using 3 foreign keys. Oh, disk activity was pure writing, as everything is cached in RAM; only writes go to the disks. It looks like PostgreSQL is working very hard to touch every row in the referred tables, as 3MB/sec * 180s is way more data than the 20MB this new table takes on disk. No WAL for the 180s case, I was testing in psql directly, in Django, add ~50% overhead for WAL logging. Tried @commit_on_success, same slowness, I had even implemented multi row insert and COPY FROM with psycopg2. That's another weird thing, how can 10M worth of inserts generate 10x 16M log segments? Table layout: id serial primary, a bunch of int32, 3 foreign keys to small table, 198 rows, 16k on disk large table, 1.2M rows, 59 data + 89 index MB on disk large table, 2.2M rows, 198 + 210MB So, am I doomed to either drop the foreign keys manually or use the table in a very un-Django way by defining saving bla_id x3 and skip using models.ForeignKey? I'd love to hear about some magical antidote / pg setting to fix this.
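
    One workaround, if dropping the constraints permanently is unacceptable, is to disable FK enforcement only for the hourly reload. A hedged psycopg2 sketch follows (the table name, file and connection string are placeholders; SET session_replication_role = replica skips FK triggers, usually needs superuser-level privileges, and makes you responsible for the integrity of the loaded rows):

      import psycopg2

      conn = psycopg2.connect("dbname=mydb")                     # placeholder DSN
      with conn, conn.cursor() as cur:
          cur.execute("SET session_replication_role = replica")  # FK triggers are skipped
          cur.execute("TRUNCATE hourly_table")                   # placeholder table
          with open("rows.csv") as f:
              cur.copy_from(f, "hourly_table", sep=",")          # bulk COPY
          cur.execute("SET session_replication_role = DEFAULT")

    Whether this is acceptable on a production site depends on how much you trust the generator of the 100k rows, since the FK checks are simply not performed.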

    Read the article

  • Find the Algorithm that generates the checksum

    - by knivmannen
    I have a sensing device that transmits a 6-byte message along with a 1-byte counter and supposedly a checksum. The data looks something like this (data, counter, checksum): 55 FF 00 00 EC FF | 60 | 1F. The last four bits in the counter are always set to 0, i.e. those bits are probably not used. The last byte is assumed to be the checksum since it has a quite peculiar nature: it tends to change randomly as the data changes. Now what I need is to find the algorithm to compute this checksum based on the data. What I have tried is all possible CRC-8 polynomials; for each polynomial I have tried reflecting the data, toggling it, initializing it with non-zero values, etc. I've come to the conclusion that I am not dealing with a normal CRC algorithm. I have also tried some Fletcher and Adler methods without success, and XORed stuff back and forth, but I still have no clue how to generate the checksum. My biggest concern is: how is the counter used? The same data with a different counter value generates a different checksum. I have tried to include the counter in my computations, but without any luck. Here are some other data samples: 55 FF 00 00 F0 FF A0 38 | 66 0B EA FF BF FF C0 CA | 5E 18 EA FF B7 FF 60 BD | F6 30 16 00 FC FE 10 81. One more thing that might be worth mentioning is that the last byte in the data only takes on the values FF or FE. If you have any tips or tricks that I may try, please post them here; I am truly desperate. Thanks
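
    Since the counter seems to participate, one cheap experiment is to brute-force the simple candidates over the 6 data bytes plus the counter byte. A Python sketch (the samples are copied from the question; everything else, including the guess that the counter is appended to the data, is an assumption):

      def crc8(data, poly, init):
          crc = init
          for byte in data:
              crc ^= byte
              for _ in range(8):
                  crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
          return crc

      samples = [  # (6 data bytes, counter, observed checksum)
          (bytes.fromhex("55FF0000ECFF"), 0x60, 0x1F),
          (bytes.fromhex("55FF0000F0FF"), 0xA0, 0x38),
          (bytes.fromhex("660BEAFFBFFF"), 0xC0, 0xCA),
          (bytes.fromhex("5E18EAFFB7FF"), 0x60, 0xBD),
          (bytes.fromhex("F6301600FCFE"), 0x10, 0x81),
      ]

      for poly in range(256):
          for init in range(256):
              if all(crc8(data + bytes([ctr]), poly, init) == chk
                     for data, ctr, chk in samples):
                  print("candidate: poly=0x%02X init=0x%02X" % (poly, init))

    The same loop is easy to repeat with the counter prepended, XORed into the result, or left out entirely, and with a sum-mod-256 or plain XOR checksum in place of crc8.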

    Read the article

  • How to deploy a number of disparate project types?

    - by niteice
    This question is similar to http://stackoverflow.com/questions/1900269/whats-the-best-way-to-deploy-an-executable-process-on-a-web-server. The situation is this: I'm developing a product that needs to be deployed to a web server. It consists of 4 website projects, a background service, a couple of command-line tools, and two assemblies shared by all of these components. Now, I also happen to administer the server that this product will be deployed on. So I'm familiar with everything that may need to be done to perform an update: Copy website files Replace the service binary Install updated components in the GAC Configure IIS Update database schema After some research it seems that, to reduce deployment time and to be able to let the other sysadmins handle deployment, I want to deploy all of these as an MSI, except that I don't know a thing about installers. I know VS can generate web deployment projects, but where do I go from there? Being able to simply click Next a few times on an installer is my goal for deploying updates. It would also be nice to modularize it, so for example, I could distribute the four websites among multiple servers and have everything appear as individual components in the installer, and as one entity in Add/Remove Programs. Is all of this too much to ask in a single package?

    Read the article

  • Free utility which runs in Linux to create a UML class diagram from Java source files

    - by DeletedAccount
    I prefer to jot down UML diagrams on paper and then implement them using Java. It would be nice to have a utility which could create UML diagrams for me, which I could share online and include in the digital documentation. In other words: I want to create UML diagrams from Java source code. The utility must be able to: Run in Linux. Handle generics, i.e. show List<Foo> correctly in parameters and return types. Show class inheritance and interface implementations. It's nice if the utility is able to: Run in Windows and Mac OS X. Display enums in some nice manner. Generate output in a diagram format which I may modify using some other utility. Run from the command line. Restrict the UML generation to a set of packages which I may specify. Handle classes/interfaces which are not part of my source code. It could include the first class/interface which is external in the UML diagram, perhaps in another color to indicate it being a library/framework created by someone else. Focus on this task and not try to solve the whole issue of documentation.

    Read the article

  • symfony doctrine build-sql error

    - by user313571
    I have some big problems with symfony and Doctrine at the beginning of a new project. I created the database diagram with MySQL Workbench, inserted the SQL into phpMyAdmin and then tried symfony doctrine:build-schema to generate the YAML schema. It generates a wrong schema (relations don't have on delete/on update) and after this I tried symfony doctrine:build --sql and symfony doctrine:insert-sql The insert-sql statement generates an error (can't create table ... failing query alter table add constraint ....), so I decided to take a look at the generated SQL and I found some differences between the SQL generated by MySQL Workbench (which works perfectly, including relations) and the SQL generated by Doctrine. I'll be brief from here on: I have two tables, EVENT and FORM, and a 1 to n relation (each event may have multiple forms), so the correct constraint (generated with Workbench) is ALTER TABLE `form` ADD CONSTRAINT `fk_form_event1` FOREIGN KEY (`event_id`) REFERENCES `event` (`id`) ON DELETE CASCADE ON UPDATE CASCADE; The Doctrine-generated statement is: ALTER TABLE event ADD CONSTRAINT event_id_form_event_id FOREIGN KEY (id) REFERENCES form(event_id); It's totally reversed, and I am sure this is where the error is. What should I do? Or is it also correct like this?

    Read the article

  • Codeigniter php activerecord orm limit and offset

    - by user2167174
    I am a bit stuck with this problem I have in php-activerecord. What I am trying to do is pagination, so I need to limit and offset the query results. I am accessing all of the user's posts like so: $user->post; How can I query this to limit and offset the results? Thanks in advance. Code: public function office() { if (!$this->session->userdata('username')) { redirect(base_url()); } $data = array(); $data['posts'] = []; $user = User::find('first', array('id' => $this->session->userdata('id'))); if ($user != null) { if ($user->post != null) { foreach ($user->post as $post) { $posts = array($post->name, $post->description, $post->date,'<a href="'.base_url().'Posts/edit/'.$post->id.'">Edit</a> <br /><a href="'.base_url().'Posts/delete/'.$post->id.'">Delete</a>'); array_push($data['posts'], $posts); } $this->table->set_heading('Name', 'Description', 'Date', '<a href="'.base_url().'Posts/create">+Add</a>'); $tmpl = array('table_open' => '<table class="table table-stripped table-bordered user-posts">'); $this->table->set_template($tmpl); $data['table'] = $this->table->generate($data['posts']); } $this->load->view('template/header.php'); $this->load->view('Users/office.php', $data); $this->load->view('template/footer.php'); } else { redirect(base_url()); } }

    Read the article

  • .NET Web Service hydrate custom class

    - by row1
    I am consuming an external C# Web Service method which returns a simple calculation result object like this: [Serializable] public class CalculationResult { public string Name { get; set; } public string Unit { get; set; } public decimal? Value { get; set; } } When I add a Web Reference to this service in my ASP .NET project Visual Studio is kind enough to generate a matching class so I can easily consume and work with it. I am using Castle Windsor and I may want to plug in other method of getting a calculation result object, so I want a common class CalculationResult (or ICalculationResult) in my solution which all my objects can work with, this will always match the object returned from the external Web Service 1:1. Is there anyway I can tell my Web Service client to hydrate a particular class instead of its generated one? I would rather not do it manually: foreach(var fromService in calcuationResultsFromService) { ICalculationResult calculationResult = new CalculationResult() { Name = fromService.Name }; yield return calculationResult; } Edit: I am happy to use a Service Reference type instead of the older Web Reference.

    Read the article

  • Access 2007 DAO VBA Error 3381 causes objects in calling methods to "break".

    - by MT
    ---AFTER FURTHER INVESTIGATION--- "tblABC" in the below example must be a linked table (to another Access database). If "tblABC" is in the same database as the code then the problem does not occur. Hi, We have recently upgraded to Office 2007. We have a method in which we have an open recordset (DAO). We then call another sub (UpdatingSub below) that executes SQL. This method has its own error handler. If error 3381 is encountered then the recordset in the calling method becomes "unset" and we get error 3420 'Object invalid or no longer set'. Other errors in UpdatingSub do not cause the same problem. This code works fine in Access 2003. Private Sub Whatonearth() Dim rs As dao.Recordset set rs = CurrentDb.OpenRecordset("tblLinkedABC") Debug.Print rs.RecordCount UpdatingSub "ALTER TABLE tblTest DROP Column ColumnNotThere" 'Error 3240 occurs on the below line even though err 3381 is trapped in the calling procedure 'This appears to be because error 3381 is encountered when calling UpdatingSub above Debug.Print rs.RecordCount End Sub Private Sub WhatonearthThatWorks() Dim rs As dao.Recordset set rs = CurrentDb.OpenRecordset("tblLinkedABC") Debug.Print rs.RecordCount 'Change the update to generate a different error UpdatingSub "NONSENSE SQL STATEMENT" 'Error is trapped in UpdatingSub. Next line works fine. Debug.Print rs.RecordCount End Sub Private Sub UpdatingSub(strSQL As String) On Error GoTo ErrHandler: CurrentDb.Execute strSQL ErrHandler: 'LogError' End Sub Any thoughts? We are running Office Access 2007 (12.0.6211.1000) SP1 MSO (12.0.6425.1000). Perhaps see if SP2 can be distributed? Sorry about formatting - not sure how to fix that.

    Read the article

  • Different EF Property DataType than Storage Layer Possible?

    - by dj_kyron
    Hi, I am putting together a WCF Data Service for PatientEntities using Entity Framework. My solution needs to address these requirements: Property DateOfBirth of entity Patient is stored in SQL Server as string. It would be ideal if the entity class did not also use the "string" type but rather a DateTime type. (I would expect this to be possible since we're abstracting away from the storage layer). Where could a conversion mechanism be put in place that would convert to and from DateTime/string so that the entity and SQL Server are in sync?. I cannot change the storage layer's structure, so I have to work around it. WCF Data Services (Read-only, so no need for saving changes) need to be used since clients will be able to use LINQ expressions to consume the service. They can generate results based on any given query scenario they need and not be constrained by a single method such as GetPatient(int ID). I've tried to use DTOs, but run into problem of mapping the ObjectContext to a DTO, I don't think that is theoretically possible...or too complicated if it is. I've tried to use Self Tracking Entities but they require the metadata from the .edmx file if I'm correct, and this isn't allowing a different property data type. I also want to add customizations to my Entity getter methods so that a property "MRN" of type "string" needs to have .Replace("MR~", string.Empty) performed before it is returned. I can add this to the getter methods but the problem with that is Entity Framework will overwrite that next time it refreshes the entity classes. Is there a permanent place I can put these? Should I use POCO instead? How would that work with WCF Data Services? Where would the service grab the metadata?

    Read the article

  • Which CMS do I need? Needs to be easy to post a certain kind of post

    - by Vian Esterhuizen
    Hi, I'm creating a site for a video store and it needs to be a CMS. I'm doing this for free, so I need to use a free CMS like WordPress, Drupal or Joomla. Do I need a new CMS, a plugin or some PHP of my own? What I need: User accounts Categories Custom posts Here's the site as it stands with WP: http://sundancevideo.ca. Right now it's an experimental site to try to work this out. What I've done for now is create a "Draft" that includes a template table with images, text and so on. The user then has to copy everything, paste it into a new post and replace what's necessary. This really isn't working well, as you may notice from the condition of the posts. What I would prefer is for it to be integrated into the WP UI, like a field for "Description" and a field for "Image" where they can upload the images as necessary. This would then generate a post, with a table including all the information and images, for as many movies as were added in the UI. Please ask if I'm not being clear. Please help, any suggestions are welcome. Thank you, Vian

    Read the article

  • Connect to a remote Oracle 11g server using OracleClient of .NET 2.0

    - by Raghu M
    I have to connect to a Oracle server on the network using a .NET / C# (Winform) application. I am trying to use System.Data.OracleClient but in vain. Here are the details I can possibly think of (that might help someone reading this question): Platform: Visual Studio 2005 / .NET 2.0 with C# on Windows Vista Home Premium Library: System.Data.OracleClient Server: Oracle 11g (located on the same LAN) Please note that I don't have Oracle installed locally and I have hunted every discussion forum possible for help - but most of them assume local Oracle installation! Here is my connection string: "User Id=TSUSER;Password=ts12TS;Data Source=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=MyServerIP)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=ORCL)));" And I get this error: OCIEnvCreate failed with return code -1 but error message text was not available. Stack trace: at System.Data.OracleClient.OciHandle..ctor(OciHandle parentHandle, HTYPE handleType, MODE ocimode, HANDLEFLAG handleflags) at System.Data.OracleClient.OracleInternalConnection.OpenOnLocalTransaction(String userName, String password, String serverName, Boolean integratedSecurity, Boolean unicode, Boolean omitOracleConnectionName) at System.Data.OracleClient.OracleInternalConnection..ctor(OracleConnectionString connectionOptions) at System.Data.OracleClient.OracleConnectionFactory.CreateConnection(DbConnectionOptions options, Object poolGroupProviderInfo, DbConnectionPool pool, DbConnection owningObject) at System.Data.ProviderBase.DbConnectionFactory.CreatePooledConnection(DbConnection owningConnection, DbConnectionPool pool, DbConnectionOptions options) at System.Data.ProviderBase.DbConnectionPool.CreateObject(DbConnection owningObject) at System.Data.ProviderBase.DbConnectionPool.UserCreateRequest(DbConnection owningObject) at System.Data.ProviderBase.DbConnectionPool.GetConnection(DbConnection owningObject) at System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection) at System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory) at System.Data.OracleClient.OracleConnection.Open() at DGKit.Util.DataUtil.Generate() in D:\SVNRoot\sandbox\DGDev\Util\DataUtil.cs:line 68

    Read the article

  • Webfaction apache + mod_wsgi + django configuration issue

    - by Dmitry Guyvoronsky
    A problem that I stumbled upon recently, and, even though I solved it, I would like to hear your opinion of what correct/simple/adopted solution would be. I'm developing website using Django + python. When I run it on local machine with "python manage.py runserver", local address is http://127.0.0.1:8000/ by default. However, on production server my app has other url, with path - like "http://server.name/myproj/" I need to generate and use permanent urls. If I'm using {% url view params %}, I'm getting paths that are relative to / , since my urls.py contains this urlpatterns = patterns('', (r'^(\d+)?$', 'myproj.myapp.views.index'), (r'^img/(.*)$', 'django.views.static.serve', {'document_root': settings.MEDIA_ROOT + '/img' }), (r'^css/(.*)$', 'django.views.static.serve', {'document_root': settings.MEDIA_ROOT + '/css' }), ) So far, I see 2 solutions: modify urls.py, include '/myproj/' in case of production run use request.build_absolute_uri() for creating link in views.py or pass some variable with 'hostname:port/path' in templates Are there prettier ways to deal with this problem? Thank you. Update: Well, the problem seems to be not in django, but in webfaction way to configure wsgi. Apache configuration for application with URL "hostname.com/myapp" contains the following line WSGIScriptAlias / /home/dreamiurg/webapps/pinfont/myproject.wsgi So, SCRIPT_NAME is empty, and the only solution I see is to get to mod_python or serve my application from root. Any ideas?
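
    If the WSGIScriptAlias cannot be changed to mount the app at /myproj/ (which would make SCRIPT_NAME correct automatically), Django has a setting that fakes it. A hedged sketch for settings.py (FORCE_SCRIPT_NAME is a real Django setting; treating it as the right fix for WebFaction's particular Apache layout is an assumption):

      # settings.py
      # Every URL produced by reverse() and {% url %} gets this prefix,
      # exactly as if the WSGI environment had SCRIPT_NAME = '/myproj'.
      FORCE_SCRIPT_NAME = '/myproj'

    Locally, where runserver serves from the root, the setting should be left unset (or set to ''), so it is common to define it only in the production settings file.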

    Read the article

  • Error when loading YAML config files in Rails

    - by ZelluX
    I am configuring Rails with MongoDB, and found a strange problem when parsing the config/mongo.yml file. config/mongo.yml is generated by executing script/rails generate mongo_mapper:config, and it looks like the following: defaults: &defaults host: 127.0.0.1 port: 27017 development: <<: *defaults database: tc_web_development test: <<: *defaults database: tc_web_test From the config file we can see the objects development and test should both have a database field. But when it is parsed and loaded in config/initializers/mongo.rb, config = YAML::load(File.read(Rails.root.join('config/mongo.yml'))) puts config.inspect MongoMapper.setup(config, Rails.env) the strange thing is that the output of puts config.inspect is {"defaults"=>{"host"=>"127.0.0.1", "port"=>27017}, "development"=>{"host"=>"127.0.0.1", "port"=>27017}, "test"=>{"host"=>"127.0.0.1", "port"=>27017}} which does not contain the database attribute. But when I execute the same statements in a plain ruby console, instead of using rails console, mongo.yml is parsed the right way: {"defaults"=>{"host"=>"127.0.0.1", "port"=>27017}, "development"=>{"host"=>"127.0.0.1", "port"=>27017, "database"=>"tc_web_development"}, "test"=>{"host"=>"127.0.0.1", "port"=>27017, "database"=>"tc_web_test"}} I am wondering what may be the cause of this problem. Any ideas? Thanks.

    Read the article

  • Convert InputStream to String with encoding given in stream data

    - by Quentin
    Hi, My input is a InputStream which contains an XML document. Encoding used in XML is unknown and it is defined in the first line of XML document. From this InputStream, I want to have all document in a String. To do this, I use a BufferedInputStream to mark the beginning of the file and start reading first line. I read this first line to get encoding and then I use an InputStreamReader to generate a String with the correct encoding. It seems that it is not the best way to achieve this goal because it produces an OutOfMemory error. Any idea, how to do it ? public static String streamToString(final InputStream is) { String result = null; if (is != null) { BufferedInputStream bis = new BufferedInputStream(is); bis.mark(Integer.MAX_VALUE); final StringBuilder stringBuilder = new StringBuilder(); try { // stream reader that handle encoding final InputStreamReader readerForEncoding = new InputStreamReader(bis, "UTF-8"); final BufferedReader bufferedReaderForEncoding = new BufferedReader(readerForEncoding); String encoding = extractEncodingFromStream(bufferedReaderForEncoding); if (encoding == null) { encoding = DEFAULT_ENCODING; } // stream reader that handle encoding bis.reset(); final InputStreamReader readerForContent = new InputStreamReader(bis, encoding); final BufferedReader bufferedReaderForContent = new BufferedReader(readerForContent); String line = bufferedReaderForContent.readLine(); while (line != null) { stringBuilder.append(line); line = bufferedReaderForContent.readLine(); } bufferedReaderForContent.close(); bufferedReaderForEncoding.close(); } catch (IOException e) { // reset string builder stringBuilder.delete(0, stringBuilder.length()); } result = stringBuilder.toString(); }else { result = null; } return result; } Regards, Quentin
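
    The mark/reset approach struggles because mark(Integer.MAX_VALUE) asks BufferedInputStream to keep the whole stream buffered. The usual alternative is to read the bytes once, sniff the encoding from the XML declaration in that byte prefix, and then decode the same bytes. Here is the idea as a short Python sketch (a deliberate simplification of the Java in the question, with UTF-8 assumed when no declaration is present):

      import re

      def stream_to_string(stream, default_encoding="UTF-8"):
          raw = stream.read()                     # read the bytes exactly once
          m = re.match(rb"<\?xml[^>]*encoding=['\"]([A-Za-z0-9._-]+)", raw)
          encoding = m.group(1).decode("ascii") if m else default_encoding
          return raw.decode(encoding)

    In Java the equivalent would be reading the InputStream into a byte[] first (or letting an XML parser do the declaration sniffing via InputSource), which also avoids rebuilding the String line by line.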

    Read the article

  • SQL placeholder in WHERE IN issue, inserted strings fail.

    - by Alastair Pitts
    As part of my job, I need to write SQL queries to connect to our PI database. To generate a query, I need to pass an array of tags (essentially primary keys), but these have to be inserted as strings. As this will be a modular query used for multiple tags, a placeholder is being used. The query relies upon the WHERE IN statement, which is where the placeholder is, like below: SELECT SUM(value * 5/1000) as "Hourly Flow [kL]" from piarchive..pitotal WHERE tag IN (?) AND time between ? and ? and timestep = '1d' and calcbasis = 'Eventweighted' and value <> '' The issue is the format in which the tags are passed in. If I add them directly into the query (for testing), they go in the format (these are example numbers): '000000012','00000032','005050236','4560236' and the query looks like: SELECT SUM(value * 5/1000) as "Hourly Flow [kL]" from piarchive..pitotal WHERE tag IN ('000000012','00000032','005050236','4560236') AND time between ? and ? and timestep = '1d' and calcbasis = 'Eventweighted' and value <> '' which works. If I try to add the same tags through the placeholder, the query fails. If I only add 1 tag, with no quotes (using the placeholder), the query works. Why is this happening? Is there any way around it?
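
    The usual explanation is that a single ? stands for exactly one value, so binding a comma-joined string makes the whole list one literal; the fix is to emit one placeholder per tag and bind each tag separately. A hedged sketch (Python with a DB-API/pyodbc-style cursor is used only for illustration; the cursor, connection and time parameters are assumptions, and the SQL is the query from the question):

      tags = ['000000012', '00000032', '005050236', '4560236']
      placeholders = ', '.join('?' * len(tags))            # "?, ?, ?, ?"
      sql = ('SELECT SUM(value * 5/1000) as "Hourly Flow [kL]" '
             'FROM piarchive..pitotal '
             'WHERE tag IN (%s) AND time BETWEEN ? AND ? '
             "AND timestep = '1d' AND calcbasis = 'Eventweighted' AND value <> ''"
             % placeholders)
      cursor.execute(sql, tags + [start_time, end_time])   # one bound value per ?

    The same idea works in most client libraries; the only part that changes is how the placeholder string is built.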

    Read the article

  • Display a ranking grid for game : optimization of left outer join and find a player

    - by Jerome C.
    Hello, I want to do a ranking grid. I have a table with different values indexed by a key: Table SimpleValue: key varchar, value int, playerId int. A player has several SimpleValues. Table Player: id int, nickname varchar. Now imagine these records: SimpleValue (key, value, playerId): (for, 1, 1), (int, 2, 1), (agi, 2, 1), (lvl, 5, 1), (for, 6, 2), (int, 3, 2), (agi, 1, 2), (lvl, 4, 2). Player (id, nickname): (1, Bob), (2, John). I want to display a ranking of these players on various SimpleValues, something like (nickname, for, lvl): (Bob, 1, 5), (John, 6, 4). For the moment I generate an SQL query based on which SimpleValue keys you want to display and which SimpleValue key you want to order players by. E.g. I want to display the 'lvl' and 'for' of each player and order them by 'lvl'; the generated query is: SELECT p.nickname as nickname, v1.value as lvl, v2.value as for FROM Player p LEFT OUTER JOIN SimpleValue v1 ON p.id=v1.playerId and v1.key = 'lvl' LEFT OUTER JOIN SimpleValue v2 ON p.id=v2.playerId and v2.key = 'for' ORDER BY v1.value This query runs perfectly. BUT if I want to display 10 different values, it generates 10 LEFT OUTER JOINs. Is there a way to simplify this query? I've got a second question: is there a way to display a portion of this ranking? Imagine I have 1000 players and I want to display the TOP 10: I use the LIMIT keyword. Now I want to display the rank of the player Bob, which is 326/1000, and I want to display 5 ranked players above and below (so positions 321 to 331). How can I achieve it? Thanks.
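
    One common way to avoid a join per key is conditional aggregation: join SimpleValue once and pivot the keys into columns with MAX(CASE ...). A hedged sketch that builds such a query in Python (the backtick quoting assumes a MySQL-style database, since 'key' and 'for' collide with keywords; adapt the quoting to your database):

      keys = ['for', 'lvl']                       # the SimpleValue keys to show
      cols = ', '.join(
          "MAX(CASE WHEN v.`key` = '%s' THEN v.value END) AS `%s`" % (k, k)
          for k in keys)
      sql = ("SELECT p.nickname, %s "
             "FROM Player p LEFT JOIN SimpleValue v ON v.playerId = p.id "
             "GROUP BY p.id, p.nickname ORDER BY `lvl`" % cols)

    For the second question, databases with window functions can wrap the ranked query in ROW_NUMBER() OVER (ORDER BY ...) and filter on positions 321 to 331; on older MySQL the usual workarounds are an incrementing user variable or simply LIMIT 320, 11 on the already-ordered query.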

    Read the article

  • Total row count for pagination using JPA Criteria API

    - by ThinkFloyd
    I am implementing "Advanced Search" kind of functionality for an Entity in my system such that user can search that entity using multiple conditions(eq,ne,gt,lt,like etc) on attributes of this entity. I am using JPA's Criteria API to dynamically generate the Criteria query and then using setFirstResult() & setMaxResults() to support pagination. All was fine till this point but now I want to show total number of results on results grid but I did not see a straight forward way to get total count of Criteria query. This is how my code looks like: CriteriaBuilder builder = em.getCriteriaBuilder(); CriteriaQuery<Brand> cQuery = builder.createQuery(Brand.class); Root<Brand> from = cQuery.from(Brand.class); CriteriaQuery<Brand> select = cQuery.select(from); . . //Created many predicates and added to **Predicate[] pArray** . . select.where(pArray); // Added orderBy clause TypedQuery typedQuery = em.createQuery(select); typedQuery.setFirstResult(startIndex); typedQuery.setMaxResults(pageSize); List resultList = typedQuery.getResultList(); My result set could be big so I don't want to load my entities for count query, so tell me efficient way to get total count like rowCount() method on Criteria (I think its there in Hibernate's Criteria).

    Read the article

  • CKEditor doesn't apply inline styles to links

    - by jomanlk
    I'm using ckeditor version 3 as a text editor to create markup to be sent through email. This means that I have to have all the styles inline and anything that needs to be styled will definitely need the style applied. I'm currently using addStylesSet to generate custom styles that can be applied to elements. The problem I have is that although this works on most elements, styles don't seem to be applied to <a> <ol> <ul> and <li> I really need to be able to apply inline styles to these elements as well. I've been looking at the examples on the ckeditor site, but even they just seem to be wrapping a <span> around the link. Is there anyway I can apply inline styles to <a> tags or failing that, can I just give ckeditor a bunch of classes that can be applied to any tag (Like TinyMCE does with it's link to an external css file)? so that I can at least do a textreplace on them to get the styles inline? I haven't pasted any code here because it's exactly the same as what's been done on the ckeditor site.

    Read the article

  • Python - Things one MUST avoid

    - by Anurag Uniyal
    Today I was bitten again by "Mutable default arguments" after many years. I usually don't use mutable default arguments unless needed, but I think with time I forgot about that, and today in the application I added tocElements=[] in a PDF generation function's argument list, and now the 'Table of Contents' gets longer and longer after each invocation of "generate pdf" :) My question is: what other things should I add to my list of things one MUST avoid? 1. Mutable default arguments. 2. Import modules always the same way, e.g. 'from y import x' and 'import x' are totally different things; they are actually treated as different modules, see http://stackoverflow.com/questions/1459236/module-reimported-if-imported-from-different-path 3. Do not use range in place of lists, because range() will become an iterator anyway, so things like this will fail, so wrap it in list(): myIndexList = [0,1,3] isListSorted = myIndexList == range(3) # will fail in 3.0 isListSorted = myIndexList == list(range(3)) # will not The same mistake can be made with xrange, e.g. myIndexList == xrange(3). 4. Catching multiple exceptions: try: raise KeyError("hmm bug") except KeyError,TypeError: print TypeError It prints "hmm bug", though it is not a bug; it looks like we are catching exceptions of type KeyError and TypeError, but instead we are catching KeyError only, bound to the variable TypeError. Instead use: try: raise KeyError("hmm bug") except (KeyError,TypeError): print TypeError
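
    For pitfall 1, the standard fix is worth spelling out, since it is what the PDF generation function above needed. A minimal sketch (the function and argument names are only illustrative, not the poster's actual code):

      def generate_pdf(toc_elements=None):
          # None is immutable, so a fresh list is created on every call
          toc_elements = [] if toc_elements is None else toc_elements
          toc_elements.append("chapter 1")
          return toc_elements

      print(len(generate_pdf()))   # 1
      print(len(generate_pdf()))   # still 1, not 2

    With a mutable default (toc_elements=[]) the same list object would be reused across calls, which is exactly why the table of contents kept growing.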

    Read the article

  • GWT: Populating a page from datastore using RPC is too slow

    - by Ilya Boyandin
    Is there a way to speed up the population of a page with GWT's UI elements which are generated from data loaded from the datastore? Can I avoid making the unnecessary RPC call when the page is loaded? More details about the problem I am experiencing: There is a page on which I generate a table with names and buttons for a list of entities loaded from the datastore. There is an EntryPoint for the page and in its onModuleLoad() I do something like this: final FlexTable table = new FlexTable(); rpcAsyncService.getAllCandidates(new AsyncCallback<List<Candidate>>() { public void onSuccess(List<Candidate> candidates) { int row = 0; for (Candidate person : candidates) { table.setText(row, 0, person.getName()); table.setWidget(row, 1, new ToggleButton("Yes")); table.setWidget(row, 2, new ToggleButton("No")); row++; } } ... }); This works, but takes more than 30 seconds to load the page with buttons for 300 candidates. This is unacceptable. The app is running on Google App Engine and using the app engine's datastore.

    Read the article

  • Using C++ DLL in C# project

    - by Frank
    Hello, I got a C++ dll which has to be integrated in a C# project. I think I found the correct way to do it, but calling the dll gives me this error: System.BadImageFormatException: An attempt was made to load a program with an incorrect format. (Exception from HRESULT: 0x8007000B) This is the function in the dll: extern long FAR PASCAL convert (LPSTR filename); And this is the code I'm using in C# namespace Test{ public partial class Form1 : Form { [DllImport("convert.dll", SetLastError = true)] static extern Int32 convert([MarshalAs(UnmanagedType.LPStr)] string filename); private void button1_Click(object sender, EventArgs e) { // generate textfile string filename = "testfile.txt"; StreamWriter sw = new StreamWriter(filename); sw.WriteLine("line1"); sw.WriteLine("line2"); sw.Close(); // add checksum Int32 ret = 0; try { ret = convert(filename); Console.WriteLine("Result of DLL: {0}", ret.ToString()); } catch (Exception ex) { lbl.Text = ex.ToString(); } } }} Any ideas on how to proceed with this? Thanks a lot, Frank

    Read the article

  • How to implement a download for dynamic files in asp.net with masterpages

    - by Tim
    Hello, the title says it all. I have seen some similar questions on SO like this or this, but either I have overlooked something or my requirement is different; neither works. My situation is the following: I have a MasterPage; one of its content pages is called MasterData.aspx. MasterData has an ASP.NET AJAX TabContainer control with one UserControl in every TabPanel. These UserControls (e.g. MD_Customer.ascx) hold the main content (like a normal page); they all have GridViews in them, and I want to provide an Excel export button. What I've tried is to use an iframe like here. But the function that adds the iframe to the document never gets called, and therefore I never see the save-as dialog. Maybe this is caused by using a MasterPage. Does somebody have an idea on how to provide a button in an UpdatePanel that causes an async postback, so that I can generate a CSV dynamically in code-behind and write it to the response? Thank you in advance. aspx-markup: <asp:UpdatePanel ID="UpdGridInfo" runat="server" > <ContentTemplate> <asp:Label ID="LblInfo" Font-Underline="false" runat="server" CssClass="content" ></asp:Label>&nbsp;&nbsp; <asp:ImageButton ToolTip="export to Excel" style="vertical-align:bottom" ID="BtnExcelExport" ImageUrl="~/images/excel2007logo.png" runat="server" /> </ContentTemplate> </asp:UpdatePanel> and the BtnExcelExport code-behind handler (of course it cannot work to write the CSV to the response of this page): Private Sub BtnExcelExport_Click(ByVal sender As Object, ByVal e As System.Web.UI.ImageClickEventArgs) Handles BtnExcelExport.Click Dim csv As String = tableToCsv(DirectCast(Me.GridSource, DataTable)) Response.AddHeader("Content-disposition", "attachment; filename=RuleConfigurationFile.csv") Response.ContentType = "application/octet-stream" Response.Write(csv) Response.End() End Sub

    Read the article

  • Associate "Code/Properties/Stuff" with Fields in C# without reflection. I am too indoctrinated by JavaScript

    - by AlexH
    I am building a library to automatically create forms for objects in the project that I am working on. The codebase is in C#, and essentially we have a HUGE number of different objects to store information about different things. If I send these objects to the client side as JSON, it is easy enough to programmatically inspect them to generate a form for all of the properties. The problem is that I want to be able to create a simple way of enforcing permissions and doing validation on the client side. It needs to be done on a field by field level. In JavaScript I would do this by creating a parallel object structure, which had some sort of { permissions : "someLevel", validator : someFunction } object at the nodes, with empty nodes implying free permissions and universal validation. This would let me simply iterate over the new object and the permissions object, run the check, and deal with the result. Because I am overfamiliar with the hammer that is JavaScript, this is really the only way that I can see to deal with this problem. My first implementation thus uses reflection to let me treat objects as dictionaries that can be programmatically iterated over, and then I just have dictionaries of dictionaries of PermissionRule objects which can be compared against. Very JavaScripty. Very awkward. Is there some better way that I can do this? Essentially a way to associate a data set with each property, and then iterate over those properties. Or else am I Doing It Wrong?

    Read the article
