Search Results

Search found 8893 results on 356 pages for 'stored'.

Page 295/356

  • Storing date/times as UTC in database

    - by James
     I am storing date/times in the database as UTC and converting them back to local time in my application based on a specific timezone. Say, for example, I have the following date/time: 01/04/2010 00:00. Say it is for a country, e.g. the UK, which observes DST (Daylight Saving Time), and at this particular time we are in daylight saving. When I convert this date to UTC and store it in the database, it is actually stored as 31/03/2010 23:00, as the date is adjusted -1 hour for DST.

     This works fine when you're observing DST at the time of submission. However, what happens when the clocks are adjusted back? When I pull that date from the database and convert it to local time, that particular datetime would be seen as 31/03/2010 23:00, when in reality it was processed as 01/04/2010 00:00. Correct me if I am wrong, but isn't this a bit of a flaw when storing times as UTC?

     Example of timezone conversion: basically what I am doing is storing the date/times of when information is submitted to my system, in order to allow users to run a range report. Here is how I am storing the date/times:

        public DateTime LocalDateTime(string timeZoneId)
        {
            var tzi = TimeZoneInfo.FindSystemTimeZoneById(timeZoneId);
            return TimeZoneInfo.ConvertTimeFromUtc(DateTime.UtcNow, tzi).ToLocalTime();
        }

     Storing as UTC:

        var localDateTime = LocalDateTime("AUS Eastern Standard Time");
        WriteToDB(localDateTime.ToUniversalTime());
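
     For contrast, here is a minimal round-trip sketch using Java's java.time rather than the poster's .NET code (the zone is chosen purely for illustration): converting the stored UTC instant back through the zone's rules reproduces the original local time, because the rules in force at that instant, not at read time, decide the offset.

        import java.time.Instant;
        import java.time.ZoneId;
        import java.time.ZonedDateTime;

        public class UtcRoundTrip {
            public static void main(String[] args) {
                ZoneId london = ZoneId.of("Europe/London");
                // Local wall-clock time during BST (UK daylight saving)
                ZonedDateTime local = ZonedDateTime.of(2010, 4, 1, 0, 0, 0, 0, london);
                // What would be stored in the database: the UTC instant
                Instant storedUtc = local.toInstant();              // 2010-03-31T23:00:00Z
                // Converting back applies the offset in force at that instant,
                // regardless of whether the clocks have since gone back
                ZonedDateTime recovered = storedUtc.atZone(london); // 2010-04-01T00:00+01:00
                System.out.println(storedUtc + " -> " + recovered);
            }
        }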

    Read the article

  • Random-access archive for Unix use

    - by tylerl
     I'm looking for a good format for archiving entire file systems of old Linux computers.

     TAR.GZ: The tar.gz format is great for archiving files with UNIX-style attributes, but since the compression is applied across the entire archive, the design precludes random access. If you want to access a file at the end of the archive, you have to start at the beginning and decompress the whole file (which could be several hundred GB) up to the point where you find the entry you're looking for.

     ZIP: Conversely, one selling point of the ZIP format is that it stores an index of the archive: filenames are stored separately with pointers to the location within the archive where the data can be found. If I want to extract a file at the end, I look up the position of that file by name, seek to the location, and extract the data. However, it doesn't store file attributes such as ownership, permissions, symbolic links, etc.

     Other options? I've tried using squashfs, but it's not really designed for this purpose. The file format is not consistent between versions, and building the archive takes a lot of time and space. What other options might suit this purpose better?
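
     As an aside, the per-entry lookup that the ZIP central directory allows is easy to demonstrate; a small sketch with Java's java.util.zip (the archive path and entry name are made up):

        import java.io.IOException;
        import java.io.InputStream;
        import java.util.zip.ZipEntry;
        import java.util.zip.ZipFile;

        public class ZipRandomAccess {
            public static void main(String[] args) throws IOException {
                // Open the archive and jump straight to one entry via the central directory,
                // without decompressing anything that comes before it.
                try (ZipFile zip = new ZipFile("/tmp/backup.zip")) {   // hypothetical path
                    ZipEntry entry = zip.getEntry("etc/hosts");        // lookup by name
                    if (entry != null) {
                        try (InputStream in = zip.getInputStream(entry)) {
                            System.out.println(entry.getName() + ": " + in.readAllBytes().length + " bytes");
                        }
                    }
                }
            }
        }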

    Read the article

  • Request-local storage in ASP.NET (accessible to the code from IHttpModule implementation)

    - by IgorK
     I need to have some object hanging around between two events I'm interested in: PreRequestHandlerExecute (where I create an instance of my object and want to save it) and PostRequestHandlerExecute (where I want to get at the object again). After the second event the object is no longer needed for my purposes and should be discarded, either by the storage or by my explicit action. So the ideal context for storing my object is per request (with guaranteed freedom from sharing issues when different threads, or processes/servers :), are serving requests).

     Take into account that the actual implementation is being made from an HttpModule and is supposed to be a pluggable solution for already-written web apps, so the option of providing some state using static/instance variables in Global.asax doesn't look good (I would have to modify Global.asax in every web application). Cache seems too broad for this use.

     I tried to see whether httpContext.Application (of type HttpApplicationState) would work for me, but I cannot tell whether it is scoped exactly per HttpApplication instance or not. AFAIK you can have several HttpApplication instances used on different threads, therefore serving several requests simultaneously; in that case storage shared between threads would not work correctly. Otherwise I would use it, because one HttpApplication instance serves exactly one request at a time. Something could be done with storing state on the HttpModule instances if I knew for sure that they are bound exactly 1-to-1 with every running HttpApplication instance (but again, I need proof that an HttpApplication instance is 1-to-1 with my HttpModule's instance).

     Any valuable and reputable links on these topics are much appreciated. It would be great to find something particularly well suited to the per-request situation, because otherwise I may end up with something ugly: probably either some 'broader'-scoped storage plus hacks to use different keys in the storage for different requests, or a thread-local approach, thereby committing to the theory that IIS/ASP.NET will never serve the first event from one thread and the second event from another, and so on.

    Read the article

  • Creating a complex tree model in Qt

    - by Zeke
     I'm writing an IRC client (yes, another one). Long story short, I'm writing a Server dialogue that keeps a list of this:

        Identity
        Networks
        Channels
        Addresses

     I have three different list views for the Networks, Channels and Addresses. When the user changes the Identity (a combo box), the network list view looks up all the networks for that specific Identity. After it loads the Networks, it automatically selects the first network and then loads all the channels and addresses for that specific network.

     The problem is that I want to have three views for one model, to minimise memory use and the loading of data, so that it is much easier to manage and I don't do a bunch of duplicate work. If you look at QColumnView, it's exactly the same thing, but I don't need it to be on one page, since the views are on entirely different tabs to make it easier to move through the Server dialogue. I'm wondering what the best way would be to handle this complexity. The information is stored in an SQLite database, and I already have the classes written to extract and store it. Just the modelling is the painful part of this solution.

    Read the article

  • Subsetting and lagging a list data structure in R

    - by user1234440
     I have a list that is indexed like the following:

        > list.stuff
        [[1]]
        [[1]]$vector ...
        [[1]]$matrix ....
        [[1]]$vector

        [[2]]
        null

        [[3]]
        [[3]]$vector ...
        [[3]]$matrix ....
        [[3]]$vector
        .
        .
        .

     Each segment in the list is indexed according to another vector of indexes:

        > index.list
        1, 3, 5, 10, 15

     In list.stuff, only at each of the indexes 1, 3, 5, 10, 15 will there be two vectors and one matrix; everything else will be null, like [[2]]. What I want to do is to lag, like the lag.xts function, so that whatever is stored in [[1]] is pushed to [[3]] and the last one drops off. This also requires subsetting the list, if that's possible. I was wondering if there are functions that handle this kind of list manipulation. My thinking is that for xts, a time series can be extracted based on an index you supply:

        xts.object[index,]   # returns the rows 1, 3, 5, 10, 15

     From here I can lag it with:

        lag.xts(xts.object[index,])

     Any help would be appreciated, thanks.

     EDIT: Here is a reproducible example:

        list.stuff <- list()
        vec <- c(1,2,3,4,5,6,7,8,9)
        vec2 <- c(1,2,3,4,5,6,7,8,9)
        mat <- matrix(c(1,2,3,4,5,6,7,8), 4, 2)
        list.vec.mat <- list(vec=vec, mat=mat, vec2=vec2)
        ind <- c(2,4,6,8,10)
        for (i in ind) {
          list.stuff[[i]] <- list.vec.mat
        }

    Read the article

  • Referencing not-yet-defined variables - Java

    - by user2537337
     Because I'm tired of solving math problems, I decided to try something more engaging with my very rusty (and even without the rust, very basic) Java skills. I landed on a super-simple people simulator, and thus far have been having a grand time working through the various steps of getting it to function. Currently, it generates an array of people-class objects and runs a for loop to cycle through a set of actions that alter the relationships between them, which I have stored in a 2d integer array. When it ends, I go look at how much they all hate each other. Fun stuff.

     Trouble has arisen, however, because I would like the program to clearly print what action is happening when it happens. I thought the best way to do this would be to add a string, description, to my "action" class (which stores variables for the actor, reactor, and the amount the relationship changes). This works to a degree, in that I can print a generic message ("A fight has occurred!") with no problem. However, ideally I would like it to be a little more specific ("Person A has thrown a rock at Person B's head!"). This latter goal is proving more difficult: attempting to construct an action with a description string that references actor and reactor gets me a big old error, "Cannot reference field before it is defined." Which makes perfect sense.

     I believe I'm not quite in programmer mode, because the only other way I can think to do this is an unwieldy switch statement that negates the need for each action to have its own nicely-packaged description. And there must be a neater way. I am not looking for examples of code, only a push in the direction of the right concept to handle this.
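
     For readers skimming this listing, the concept in question, sketched with made-up names (not the poster's classes): build the description from the constructor parameters, which are already in scope, rather than from fields that have not been assigned yet.

        class Person {
            private final String name;
            Person(String name) { this.name = name; }
            String getName() { return name; }
        }

        class Action {
            private final Person actor;
            private final Person reactor;
            private final int relationshipChange;
            private final String description;

            Action(Person actor, Person reactor, int relationshipChange, String template) {
                this.actor = actor;
                this.reactor = reactor;
                this.relationshipChange = relationshipChange;
                // The parameters are usable here, so no "cannot reference field
                // before it is defined" problem arises.
                this.description = String.format(template, actor.getName(), reactor.getName());
            }

            String getDescription() { return description; }
        }

        // Usage: new Action(a, b, -5, "%s has thrown a rock at %s's head!").getDescription()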

    Read the article

  • Why don't my MySQL GROUP BY hour vs. half-hour queries display the same data?

    - by stogdilla
     I need to be able to display data that I have in 15-minute increments in different display types. I have two queries that are giving me trouble: one shows data by half hour, the other shows data by hour. The only issue is that the data totals change between queries. It's not counting the data that happens between the time frames, only AT the time frames.

     Example: there are 5 things that happen at 7:15am, 2 that happen at 7:30am and 4 at 7:00am. The 15-minute view displays all of the data. The half-hour view displays the data from 7:00am and from 7:30am but ignores the 7:15am. The hour view only shows the 7:00am data.

     Here are my queries:

        $query = "SELECT * FROM data WHERE startDate='$startDate' and queue='$queue'
                  GROUP BY HOUR(start), floor(minute(start)/30)";

     and

        $query = "SELECT * FROM data WHERE startDate='$startDate' and queue='$queue'
                  GROUP BY HOUR(start)";

     How can I pull out the data in groups like I have but get all the data included? Is the issue the way the data is stored in the MySQL table? Currently I have a column with dates (2010-03-29) and a column with times (00:00). Do I need to convert these into something else?

    Read the article

  • Start developing a database application using Oracle + NetBeans

    - by Ranhiru
     I have thought of creating my first database application for one of my projects using Oracle and Java, and I have chosen NetBeans as my development environment. I have a few questions about getting started. Please bear with me, as I'm a complete beginner with Oracle + NetBeans. This will be a data-intensive (yet still for a college project) database application. I do not need 1000-user concurrency or other very advanced features, but I do need basic stuff such as triggers, stored procedures etc.

     1. Will the 11g "Express" edition (XE) suffice for my requirements?
     2. Do I need any Java-to-Oracle bridge (database connectivity driver, e.g. ODBC etc.) for NetBeans to connect to the Oracle database? If yes, what are they? Does NetBeans support Oracle databases natively?
     3. Is there any easy-to-follow guide on how to connect to the database and insert/retrieve/display data in a J2SE application? (I know that I should Google this, but if there's any guide previously followed by anyone that is considered easy, it would be greatly appreciated.)
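
     On the last question, a minimal JDBC sketch, assuming a local Oracle XE instance with the default SID, the ojdbc driver on the classpath, and placeholder credentials:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class OracleXeDemo {
            public static void main(String[] args) throws Exception {
                // Placeholder connection details for a local XE install
                String url = "jdbc:oracle:thin:@localhost:1521:XE";
                try (Connection conn = DriverManager.getConnection(url, "myuser", "mypassword");
                     Statement stmt = conn.createStatement();
                     ResultSet rs = stmt.executeQuery("SELECT sysdate FROM dual")) {
                    while (rs.next()) {
                        System.out.println("Connected, server time: " + rs.getString(1));
                    }
                }
            }
        }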

    Read the article

  • Some specific questions about object-oriented and MVC design

    - by Samn
     I have two objects, Users and Mail. Users create Mail objects and send them to other users. If I wanted to get all mail for a User, I could create a method like GetMail() that would return an array of Mail objects owned by that User. But if I wanted to get all mail across the system, what "type" of object would be responsible for that? To solve this problem, I usually create a Manager, an object responsible for dealing with a collection of a particular type of object. MailManager deals with collections of Mail objects: GetMailForUser() is one method, GetAllMail() is another. The User object invokes the MailManager and executes GetMailForUser(me). Is this stupid?

     When a user executes the controller CreateMail, a new instance of the Mail object is created. The Mail object, seeing it is creating a new Mail of type 'sent', decides to go ahead and create a second Mail object for the recipient, of type 'received'. Creating one Mail object triggers the creation of a second Mail object. Is this stupid? Should the controller have created both Mail objects, or just the first 'sent' one?

     When two Users are friends, the association is stored in a table of Relationships. I use a simple object for Relationships. A RelationshipManager has a method called GetFriendsForUser(). The User object has a method GetFriends(), which invokes the RelationshipManager. Is this stupid?
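
     For concreteness, a bare-bones sketch of the manager arrangement described above (names and fields are invented for illustration, not a verdict on the design):

        import java.util.ArrayList;
        import java.util.List;

        class Mail {
            final String ownerId;
            Mail(String ownerId) { this.ownerId = ownerId; }
        }

        class MailManager {
            private final List<Mail> allMail = new ArrayList<>();

            void add(Mail mail) { allMail.add(mail); }

            // System-wide view: the manager owns the whole collection
            List<Mail> getAllMail() { return new ArrayList<>(allMail); }

            // Per-user view, invoked by the User object as getMailForUser(me)
            List<Mail> getMailForUser(String userId) {
                List<Mail> result = new ArrayList<>();
                for (Mail m : allMail) {
                    if (m.ownerId.equals(userId)) {
                        result.add(m);
                    }
                }
                return result;
            }
        }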

    Read the article

  • HTTP Compression problems on IIS7

    - by Jonathan Wood
     I've spent quite a bit of time on this but seem to be going nowhere. I have a large page that I really want to speed up. The obvious place to start seems to be HTTP compression, but I just can't seem to get it to work for me. After considerable searching, I've tried several variations of the code below. It kind of works, but after refreshing the browser, the results seem to fall apart. They were turning to garbage when the page used caching. If I turn off caching, then the page seems right, but I lose my CSS formatting (stored in a separate file) and get an error that an included JS file contains invalid characters.

     Most of the resources I've found on the Web were either very old or focused on accessing IIS directly. My page is running on a shared hosting account and I do not have direct access to IIS7, which it's running on.

        protected void Application_BeginRequest(object sender, EventArgs e)
        {
            // Implement HTTP compression
            if (Request["HTTP_X_MICROSOFTAJAX"] == null) // Avoid compressing AJAX calls
            {
                // Retrieve accepted encodings
                string encodings = Request.Headers.Get("Accept-Encoding");
                if (encodings != null)
                {
                    // Verify support for or gzip (deflate takes preference)
                    encodings = encodings.ToLower();
                    if (encodings.Contains("gzip") || encodings == "*")
                    {
                        Response.Filter = new GZipStream(Response.Filter, CompressionMode.Compress);
                        Response.AppendHeader("Content-Encoding", "gzip");
                        Response.Cache.VaryByHeaders["Accept-encoding"] = true;
                    }
                    else if (encodings.Contains("deflate"))
                    {
                        Response.Filter = new DeflateStream(Response.Filter, CompressionMode.Compress);
                        Response.AppendHeader("Content-Encoding", "deflate");
                        Response.Cache.VaryByHeaders["Accept-encoding"] = true;
                    }
                }
            }
        }

     Is anyone having better success with this?

    Read the article

  • Two entities with @ManyToOne should join the same table

    - by Ivan Yatskevich
     I have the following entities:

     Student

        @Entity
        public class Student implements Serializable {
            @Id
            @GeneratedValue(strategy = GenerationType.AUTO)
            private Long id;
            //getter and setter for id
        }

     Teacher

        @Entity
        public class Teacher implements Serializable {
            @Id
            @GeneratedValue(strategy = GenerationType.AUTO)
            private Long id;
            //getter and setter for id
        }

     Task

        @Entity
        public class Task implements Serializable {
            @Id
            @GeneratedValue(strategy = GenerationType.AUTO)
            private Long id;

            @ManyToOne(optional = false)
            @JoinTable(name = "student_task",
                       inverseJoinColumns = { @JoinColumn(name = "student_id") })
            private Student author;

            @ManyToOne(optional = false)
            @JoinTable(name = "student_task",
                       inverseJoinColumns = { @JoinColumn(name = "teacher_id") })
            private Teacher curator;

            //getters and setters
        }

     Consider that author and curator are already stored in the DB and both are in the attached state. I'm trying to persist my Task:

        Task task = new Task();
        task.setAuthor(author);
        task.setCurator(curator);
        entityManager.persist(task);

     Hibernate executes the following SQL:

        insert into student_task (teacher_id, id) values (?, ?)

     which, of course, leads to:

        null value in column "student_id" violates not-null constraint

     Can anyone explain this issue and possible ways to resolve it?
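
     For comparison, the more usual mapping keeps both foreign keys on the task table itself instead of routing two @ManyToOne associations through one shared join table. A sketch only, with invented column names, not necessarily what the poster's schema requires:

        import java.io.Serializable;
        import javax.persistence.*;

        @Entity
        public class Task implements Serializable {

            @Id
            @GeneratedValue(strategy = GenerationType.AUTO)
            private Long id;

            // Plain foreign-key columns avoid two associations competing
            // for the same row in a shared join table.
            @ManyToOne(optional = false)
            @JoinColumn(name = "author_id")
            private Student author;

            @ManyToOne(optional = false)
            @JoinColumn(name = "curator_id")
            private Teacher curator;

            // getters and setters omitted
        }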

    Read the article

  • How does one implement storage/retrieval of smart-search/mailbox features?

    - by humble_coder
     Hi All, I have a question regarding the implementation of smart-search features, for example something like "smart mailboxes" in various email applications. Let's assume you have your data (emails) stored in a database and, depending on the field for which the query will be created, you present different options to the end user. For the moment let's assume the Subject, Verb, Object approach. For instance, say you have the following:

        SUBJECTs: message, to_address, from_address, subject, date_received
        VERBs:    contains, does_not_contain, is_equal_to, greater_than, less_than
        OBJECTs:  ???????

     Now, in case it isn't clear, I want a table structure (although I'm not opposed to an external XML-esque file of some sort) to store, and later retrieve/present, my criteria for smart searches/mailboxes for later use. As an example, using SVO I could easily store and then reconstruct a query for "date between two dates": simply use "date greater than" AND "date less than". However, what if, in the same smart search, I wanted a "between" OR'ed with another criterion? You can see that it might get out of hand, not necessarily in the query creation (as that is rather simplistic), but in the option presentation and storage mechanism.

     Perhaps I need to think on a more granular level. Perhaps I need to simply allow the user to select AND or OR for each entry independently instead of making it an all-or-nothing type of smart search (i.e. instead of MATCH ALL or MATCH ANY, I simply allow them to select per criterion; I just don't want it to turn into a Hydra).

     Any input would be most appreciated. My apologies if the question is a bit incoherent; it is late, and my brain is toast. Best.
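
     To make the per-entry AND/OR idea concrete, a small sketch (the names are invented): each criterion is one stored row carrying its own conjunction, so a smart search is just an ordered list of rows rather than a single match-all/match-any flag.

        import java.util.List;

        enum Conjunction { AND, OR }

        // One persisted row per criterion: subject, verb, object, plus its own conjunction.
        record Criterion(String subject, String verb, String object, Conjunction conjunction) {}

        record SmartSearch(String name, List<Criterion> criteria) {}

        class SmartSearchDemo {
            public static void main(String[] args) {
                SmartSearch search = new SmartSearch("recent or from boss", List.of(
                    new Criterion("date_received", "greater_than", "2010-01-01", Conjunction.AND),
                    new Criterion("date_received", "less_than", "2010-02-01", Conjunction.AND),
                    new Criterion("from_address", "contains", "boss@example.com", Conjunction.OR)));
                System.out.println(search);
            }
        }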

    Read the article

  • Storing Credit Card Numbers in SESSION - ways around it?

    - by JM4
     I am well aware of PCI compliance, so I don't need an earful about storing CC numbers (and especially CVV numbers) in our company database during the checkout process. However, I want to be as safe as possible when handling sensitive consumer information and am curious how to get around passing CC numbers from page to page WITHOUT using SESSION variables, if at all possible. My site is built this way:

     Step 1) Collect credit card information from the customer. When the customer hits submit, the information is first run through JS validation, then through PHP validation; if everything passes, he moves to step 2.
     Step 2) The information is displayed on a review page so the customer can check the details of the upcoming transaction. Only the first 6 and last 4 digits of the CC are shown on this page, but card type and exp date are shown fully. If he clicks proceed:
     Step 3) The information is sent to another PHP page which runs one last validation and sends the information through a secure payment gateway, and a string is returned with the details.
     Step 4) If all is good and well, the consumer information (personal, not CC) is stored in the DB and he is redirected to a completion page. If anything is bad, he is informed and told to revisit the CC processing page to try again (max of 3 times).

     Any suggestions?

    Read the article

  • Java HashMap containsKey always false

    - by Dennis
     I have the funny situation that I store a Coordinate into a HashMap<Coordinate, GUIGameField>. Now, the strange thing about it is that I have a fragment of code which should guard against any coordinate being used twice. But if I debug this code:

        if (mapForLevel.containsKey(coord)) {
            throw new IllegalStateException("This coordinate is already used!");
        } else {
            ...do stuff...
        }

     ... containsKey always returns false, although I stored a coordinate with a hashcode of 9731 into the map and the current coord also has the hashcode 9731. After that, mapForLevel.entrySet() looks like:

        (java.util.HashMap$EntrySet) [(270,90)=gui.GUIGameField@29e357, (270,90)=gui.GUIGameField@ca470]

     What could I have possibly done wrong? I ran out of ideas. Thanks for any help!

        public class Coordinate {
            int xCoord;
            int yCoord;

            public Coordinate(int x, int y) {
                ...store params in attributes...
            }

            ...getters & setters...

            @Override
            public int hashCode() {
                int hash = 1;
                hash = hash * 41 + this.xCoord;
                hash = hash * 31 + this.yCoord;
                return hash;
            }
        }
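
     Worth noting as an illustration: HashMap.containsKey matches keys by hashCode() and then equals(), so a key class normally overrides both. A sketch of a Coordinate that does (not the poster's code):

        import java.util.HashMap;
        import java.util.Map;

        class Coordinate {
            private final int x;
            private final int y;

            Coordinate(int x, int y) { this.x = x; this.y = y; }

            @Override
            public int hashCode() {
                return 41 * (41 + x) + y;
            }

            @Override
            public boolean equals(Object o) {
                if (this == o) return true;
                if (!(o instanceof Coordinate)) return false;
                Coordinate other = (Coordinate) o;
                return x == other.x && y == other.y;
            }
        }

        class ContainsKeyDemo {
            public static void main(String[] args) {
                Map<Coordinate, String> map = new HashMap<>();
                map.put(new Coordinate(270, 90), "field");
                System.out.println(map.containsKey(new Coordinate(270, 90))); // true
            }
        }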

    Read the article

  • Confusion about using types instead of GTTs in Oracle

    - by Omnipresent
     I am trying to convert queries like the one below to types so that I won't have to use a GTT:

        insert into my_gtt_table_1
          (house, lname, fname, MI, fullname, dob)
          (select house, lname, fname, MI, fullname, dob
             from (select 'REG' house,
                          mbr_last_name lname,
                          mbr_first_name fname,
                          mbr_mi MI,
                          mbr_first_name || mbr_mi || mbr_last_name fullname,
                          mbr_dob dob
                     from table_1 a, table_b
                    where a.head = b.head
                      and mbr_number = '01'
                      and mbr_last_name = v_last_name) c

     The above is just a sample; the real queries are bigger than this, and it runs inside a stored procedure. So, to avoid the GTT (my_gtt_table_1), I did the following:

        create or replace type lname_row as object (
          house    varchar2(30)
          lname    varchar2(30),
          fname    varchar2(30),
          MI       char(1),
          fullname VARCHAR2(63),
          dob      DATE
        )

        create or replace type lname_exact as table of lname_row

     Now in the SP:

        type lname_exact is table of <what_table_should_i_put_here>%rowtype;
        tab_a_recs lname_exact;

     In the above I am not sure what table to put, as my query has nested subqueries. Query in the SP (I am trying this as a sample to see if it works):

        select lname_row('', '', '', '', '', '', sysdate)
          bulk collect into tab_a_recs
          from table_1;

     I am getting errors like:

        ORA-00913: too many values

     I am really confused and stuck with this :(

    Read the article

  • How to count the Chinese words in a file using a regex in Perl?

    - by Ivan
     I tried the following Perl code to count the Chinese words in a file. It seems to work, but it doesn't produce the right result. Any help is greatly appreciated. The error message is:

        Use of uninitialized value $valid in concatenation (.) or string at word_counting.pl line 21, <FILE> line 21.
        Total things = 125, valid words =

     which suggests to me the problem is the file format. The "total things" count is 125, which is the number of strings (125 lines). The strangest part is that my console displays all the individual Chinese words correctly without any problem. The utf-8 pragma is installed.

        #!/usr/bin/perl -w
        use strict;
        use utf8;
        use Encode qw(encode);
        use Encode::HanExtra;

        my $input_file = "sample_file.txt";
        my ($total, $valid);
        my %count;

        open (FILE, "< $input_file") or die "Can't open $input_file: $!";
        while (<FILE>) {
            foreach (split) {       #break $_ into words, assign each to $_ in turn
                $total++;
                next if /\W|^\d+/;  #strange words skip the remainder of the loop
                $valid++;
                $count{$_}++;       # count each separate word stored in a hash
                ## next comes here ##
            }
        }

        print "Total things = $total, valid words = $valid\n";
        foreach my $word (sort keys %count) {
            print "$word \t was seen \t $count{$word} \t times.\n";
        }

     ##---Data---- sample_file.txt

        ??????,???????,????.??????.????:"?????????????,??????,????????.????????,?????????,
        ???????????.????????,???????????,??????,??????.???:`??,???????????.'?????,
        ??????????."??????,??????.????.???,
        ????????????,????,??????,?????????,??????????????.
        ????????,??????,???????????,????????,????????.????,????,???????,
        ??????????,??????,????????.??????.

    Read the article

  • XSLT lookup (store variables during a loop and use them in another template)

    - by krisvandenbergh
     Is there a way to store a variable/param during a for-each loop in a sort of array, and use it in another template, namely <xsl:template match="Foundation.Core.Classifier.feature">? All the classname values that appear during the for-each should be stored. How would you implement that in XSLT? Here's my current code:

        <xsl:for-each select="Foundation.Core.Class">
          <xsl:for-each select="Foundation.Core.ModelElement.name">
            <xsl:param name="classname">
              <xsl:value-of select="Foundation.Core.ModelElement.name"/>
            </xsl:param>
          </xsl:for-each>
          <xsl:apply-templates select="Foundation.Core.Classifier.feature" />
        </xsl:for-each>

     Here's the template in which the classname parameters should be used:

        <xsl:template match="Foundation.Core.Classifier.feature">
          <xsl:for-each select="Foundation.Core.Attribute">
            <owl:DatatypeProperty rdf:ID="{Foundation.Core.ModelElement.name}">
              <rdfs:domain rdf:resource="$classname" />
            </owl:DatatypeProperty>
          </xsl:for-each>
        </xsl:template>

     The input file can be found at http://krisvandenbergh.be/uml_pricing.xml

    Read the article

  • mysql: storing arbitrary data

    - by Hailwood
     Background: I was asking a question on Stack Overflow regarding creating tables on the fly, where this conversation ensued:

        "This smells like a terrible idea! In fact, it smells just like this one. What in the world do you want to use this for?" – deceze

        "@deceze: very true. However, how else would you store the contents of these CSV files? They must be stored in MySQL for indexing. The only solid fact about them is that they all have a mobile column with a standard format. The CSV can have an arbitrary amount of columns with an arbitrary amount of rows. They can (with no exaggeration) range from a single-row, 35-column CSV to an 80k-row, single-column CSV. I am open to other ideas." – Hailwood

        "There are many solutions for this, from attribute-value schemas to JSON storage and NoSQL storage. Open a new question about it. Whatever you do though, don't dynamically create tables!" – deceze

     Question: So my question is, what would you say is the best way to store this data? Are you in agreement with deceze about not creating dynamic tables?

    Read the article

  • Why do Scala maps have poor performance relative to Java?

    - by Mike Hanafey
     I am working on a Scala app that consumes large amounts of CPU time, so performance matters. The prototype of the system was written in Python, and its performance was unacceptable. The application does a lot of inserting and manipulating of data in maps. Rex Kerr's Thyme was used to look at the performance of updating and retrieving data from maps. Basically "n" random Ints were stored in maps and retrieved from the maps, with the time relative to java.util.HashMap used as a reference. The full results for a range of "n" are here.

     Sample (n = 100,000) performance relative to Java, smaller is worse:

                     Update    Read
        Mutable      16.06%    76.51%
        Immutable    31.30%    20.68%

     I do not understand why the Scala immutable map beats the Scala mutable map in update performance. Using sizeHint on the mutable map does not help (it appears to be ignored in the tested implementation, 2.10.3). Even more surprisingly, the immutable read performance is worse than the mutable read performance, more significantly so with larger maps. The update performance of the Scala mutable map is surprisingly bad, relative to both Scala immutable and plain Java. What is the explanation?

    Read the article

  • Storing and loading a subtitle of an annotation with NSUserDefaults

    - by Krismutt
     Happy new year everybody! Basically, what I am trying to do is save a subtitle for an annotation when the application terminates. When the application starts again, I want this stored subtitle to show up in the callout of the annotation again. What am I doing wrong? I just can't get it to work...

     storeLocation.m:

        - (void)setCoordinate:(CLLocationCoordinate2D)coor {
            MKReverseGeocoder *geocoder = [[[MKReverseGeocoder alloc] initWithCoordinate:coor] autorelease];
            geocoder.delegate = self;
            coordinate = coor;
            [geocoder start];

            NSUserDefaults *userDef = [NSUserDefaults standardUserDefaults];
            [userDef setValue:subtitle forKey:@"SavedAddress"];
            [userDef synchronize];
            NSLog(@"Sparade adress");
        }

     mainViewController.m:

        -(void)viewDidLoad {
            NSString *savedAddress = [[NSUserDefaults standardUserDefaults] objectForKey:@"SavedAddress"];
            if (savedAddress) {
                // what code should I add here
                // so that the subtitle shows up
                // when the application launches?
            }
        }

     Would really appreciate some help with this one... Thanks in advance!

    Read the article

  • Returning data from SQL Server reporting web service call

    - by user79339
     Hi, I am generating a report that contains a version number. The version number is stored in the DB and retrieved/incremented as part of the report generation. The only problem is that I am calling SSRS via a web service call, which returns the generated report as a byte array. Is there any way to get the version number out of this generated report? For example, to display a dialog that says "You generated Status Report, version number 3".

     I tried passing in an output parameter and setting it inside the stored proc. It's modified when I execute it in SQL Management Studio, but not in the reporting studio; or at least I can't seem to bind to the modified, post-execution value (using the expression "=Parameters!ReportVersion.Value").

     Of course, I could get/increment the version number from the database myself before calling the SSRS web service and pass it along as a parameter to the report, but that might cause concurrency problems. On the whole, it just seems neater for the stored proc to access/generate the version number and return it to the reporting engine, which generates the report with the version number and returns the updated version number to the web service client. Any thoughts/ideas?

    Read the article

  • Details View and integration with TinyMCE <%@ Page validateRequest="false" %>

    - by GibboK
     I use TinyMCE in a DetailsView in EDIT MODE. I would like to know if there is a solution which can prevent request validation from triggering an error WITHOUT USING <%@ Page validateRequest="false" %> for my page. The only way I've found so far is to encode the TextBox used by TinyMCE using the option encoding: "xml":

        tinyMCE.init({
            encoding: "xml",

     This way request validation does not trigger an error, but when the data in the TextBox is read back, the result is encoded. I also tried to decode the content of the TextBox on PageLoad using this code:

        myTextBox.Text = HttpUtility.HtmlDecode(myTextBox.Text)

     But the result is not as expected; I still only see encoded text. Any ideas? Thanks.

     UPDATE: I found a solution to my problem. I added this code in the DetailsView's DataBound event:

        TextBox myContentAuthor = (TextBox)uxAuthorListDetailsView.FindControl("uxContentAuthorInput");
        myContentAuthor.Text = HttpUtility.HtmlDecode(myContentAuthor.Text);

     So on the DataBound event (which should work even on postback) the content is decoded for the TinyMCE textbox. Here is how it works:

        01 - TinyMCE ESCAPES data inserted in the textbox using the option encoding: "xml"
        02 - The data is stored escaped
        03 - To read the data back into a TextBox that uses TinyMCE, use the DataBound event of the DetailsView and HttpUtility.HtmlDecode (so the content will appear decoded)
        04 - You can modify the content in the textbox in edit mode; on postback TinyMCE will encode it again using encoding: "xml", and so on

     Hope this helps someone else. But please give me your comments on this solution, thanks! Maybe you can come up with a more elegant solution! :-)

    Read the article

  • Calculating and saving space in PostgreSQL

    - by punkish
     I have a table in Pg like so:

        CREATE TABLE t (
            a BIGSERIAL NOT NULL,  -- 8 b
            b SMALLINT,            -- 2 b
            c SMALLINT,            -- 2 b
            d REAL,                -- 4 b
            e REAL,                -- 4 b
            f REAL,                -- 4 b
            g INTEGER,             -- 4 b
            h REAL,                -- 4 b
            i REAL,                -- 4 b
            j SMALLINT,            -- 2 b
            k INTEGER,             -- 4 b
            l INTEGER,             -- 4 b
            m REAL,                -- 4 b
            CONSTRAINT a_pkey PRIMARY KEY (a)
        )

     The above adds up to 50 bytes per row. My experience is that I need another 40% to 50% for system overhead, without even any user-created indexes on the above. So, about 75 bytes per row. I will have many, many rows in the table, potentially upward of 145 billion rows, so the table is going to be pushing 13-14 terabytes. What tricks, if any, could I use to compact this table? My possible ideas are below:

        1. Convert the REAL values to INTEGERs. If they can be stored as SMALLINT, that is a saving of 2 bytes per field.
        2. Convert the columns b .. m into an array. I don't need to search on those columns, but I do need to be able to return one column's value at a time. So, if I need column g, I could do something like:

               SELECT a, arr[5] FROM t;

     Would I save space with the array option? Would there be a speed penalty? Any other ideas?

    Read the article

  • PHP does not store all variables in session

    - by Flurin Juvalta
     Dear all, I'm assigning session variables by filling the $_SESSION array throughout my script. My problem is that, for some reason, not all variables are available in the session. Here is a shortened version of my code to explain the issue:

        session_start();
        print_r($_SESSION);

        $_SESSION['lang'] = 'de';
        $_SESSION['location_id'] = 11;
        $_SESSION['region_id'] = 1;
        $_SESSION['userid'] = 'eccbc87e4b5ce2fe28308fd9f2a7baf3';
        $_SESSION['hash'] = 'dce57f1e3bc6fba32afab93b0c38b662';

        print_r($_SESSION);

     The first call prints something like this:

        Array ( )
        Array ( [lang] => de [location_id] => 11 [region_id] => 1 [userid] => eccbc87e4b5ce2fe28308fd9f2a7baf3 [hash] => dce57f1e3bc6fba32afab93b0c38b662 )

     The second call prints:

        Array ( [lang] => de [location_id] => 11 [region_id] => 1 )
        Array ( [lang] => de [location_id] => 11 [region_id] => 1 [userid] => eccbc87e4b5ce2fe28308fd9f2a7baf3 [hash] => dce57f1e3bc6fba32afab93b0c38b662 )

     As you can see, the important login information is not stored in the session. Does anybody have an idea what could be wrong with my session? Thanks for your answers!

    Read the article

  • MySQL: get point-in-time totals from related tables

    - by batfastad
     Hi everyone. We have an order book and invoicing system, and I've been tasked with trying to output monthly rolling totals from its tables, but I don't really know where to start with this. I think there's some SQL syntax that I don't even know about yet. I'm familiar with INNER/LEFT JOINs and GROUP BY etc., but grouping by date is confusing, since I don't know how to limit the data to only the date that's being grouped on at that point. I think this will involve joining the tables to themselves or possibly a sub-select. I always thought it best to avoid sub-selects apart from when absolutely necessary.

     Basically the system has 3 tables:

        orders:       order_id, currency, order_stamp
        orders_lines: order_line_id, invoice_id, order_id, price
        invoices:     invoice_id, invoice_stamp

     order_stamp and invoice_stamp are UTC Unix timestamps stored as integers, not MySQL timestamps. I'm trying to get a listing by year/month showing the total of current unbilled orders (sum of price) at that point in time. Current orders are ones where order_stamp is less than or equal to 00:00 on the 1st of the month. Unbilled orders are ones where invoice_stamp is null or invoice_stamp is greater than 00:00 on the 1st of the month. At that point in time there may not be a related invoice yet, and invoice_id might be null.

     Anyone got any suggestions on what I should join to what and what I need to group by? Cheers, B

    Read the article
