Search Results

Search found 11573 results on 463 pages for 'store'.


  • Storing PL/SQL stored-procedure values in Oracle memory caches for extended periods

    - by Ira Baxter
    I am collecting runtime profiling data from PL/SQL stored procedures. The data is collected as certain stored procedures execute, but it needs to accumulate across multiple executions of those procedures. To minimize overhead, I'd like to keep that profiling data in some PL/SQL-accessible, memory-resident Oracle storage for the duration of the data collection interval, and then dump out the accumulated values. The data collection interval might be seconds or hours; it's OK not to keep this data across system boots. Something like session state in web servers would do. What are my choices for storing such data? The only method I know about is contexts in dbms_session:

        procedure set_ctx (value in varchar2) as
        begin
            dbms_session.set_context('Test_Ctx', 'AccumulatedValue', value,
                                     NULL, 'ProfilerSessionId');
        end set_ctx;

    This works, but takes some 50 milliseconds(!) per update to the accumulated value. What I'm hoping for is a way to access and store an array of values in some Oracle memory using vanilla PL/SQL statements, with access times typical of accesses to package-local arrays.
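
    For reference, the kind of package-local storage the question is hoping for would look something like this (a minimal sketch; the package, type, and procedure names are illustrative assumptions):

        create or replace package profiler_data as
            type counter_tab is table of number index by varchar2(100);
            counters counter_tab;  -- session-scoped, memory-resident for the life of the session
            procedure bump (name in varchar2, amount in number default 1);
        end profiler_data;
        /
        create or replace package body profiler_data as
            procedure bump (name in varchar2, amount in number default 1) as
            begin
                if counters.exists(name) then
                    counters(name) := counters(name) + amount;
                else
                    counters(name) := amount;
                end if;
            end bump;
        end profiler_data;
        /

    Package state is per-session, so aggregating across sessions would need something like a global application context or a periodic flush to a table.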

    Read the article

  • Persisting complex data between postbacks in ASP.NET MVC

    - by Robert Wagner
    I'm developing an ASP.NET MVC 2 application that connects to some services to do data retrieval and update. The services require that I provide the original entity along with the updated entity when updating data. This is so they can do change tracking and optimistic concurrency. The services cannot be changed. My problem is that I need to somehow store the original entity between postbacks. In WebForms, I would have used ViewState, but from what I have read, that is out for MVC. The original values do not have to be tamper-proof, as the services treat them as untrusted. The entities would be (max) 1k and it is an intranet app. The options I have come up with are:

    1. Session - Ruled out - Store the entity in the Session, but I don't like this idea as there are no plans to share session state between servers.
    2. URL - Ruled out - Data is too big.
    3. HiddenField - Store the serialized entity in a hidden field, perhaps with encryption/encoding.
    4. HiddenVersion - The entities have a (SQL) version field on them, which I could put into a hidden field. Then on a save I get the "original" entity from the services and compare the versions, doing my own optimistic concurrency.
    5. Cookies - Like 3 or 4, but using a cookie instead of a hidden field.

    I'm leaning towards option 4, although 3 would be simpler. Are these valid options or am I going down the wrong track? Is there a better way of doing this?
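
    A minimal sketch of option 4 (all names here - Entity, Version, _service - are illustrative assumptions, not from the question); MVC 2's ByteArrayModelBinder round-trips a byte[] version stamp through a Base64 hidden field:

        // View: <%= Html.Hidden("originalVersion", Model.Version) %>

        [HttpPost]
        public ActionResult Edit(Entity posted, byte[] originalVersion)
        {
            // Re-fetch the current entity; it doubles as the "original" the services want.
            Entity current = _service.GetEntity(posted.Id);

            // Home-grown optimistic concurrency: compare version stamps.
            // (SequenceEqual needs a "using System.Linq;" directive.)
            if (!current.Version.SequenceEqual(originalVersion))
            {
                ModelState.AddModelError("", "The record was changed by someone else.");
                return View(posted);
            }

            _service.Update(current, posted);   // original + updated, as the services require
            return RedirectToAction("Index");
        }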

    Read the article

  • Alternatives to storing Moose object using Apache::Session with CODE references

    - by Hartmut Behrens
    I have a Moose class that I would like to store using Apache::Session::File. However, Apache::Session::File by default will not store it, and instead I get the error message:

        (in cleanup) Can't store CODE items at blib\lib\Storable.pm (autosplit into blib\lib\auto\Storable\_freeze.al)...

    This problem can be circumvented by setting

        $Storable::Deparse = 1;
        $Storable::Eval = 1;

    in order to allow CODE references to be serialized. The offending method in the Moose class is listed below; it retrieves a column from a MySQL database:

        sub _build_cell_generic {
            my ($self, $col) = @_;
            my $sth = $self->call_dbh('prepare',
                'select ' . $col . ' from ' . $self->TABLE .
                ' where CI = ? and LAC = ? and IMPORTDATE = ?');
            $sth->execute($self->CI, $self->LAC, $self->IMPORTDATE);
            my $val = $sth->fetchrow_array;
            $sth->finish;
            return defined $val ? $val : undef;
        }

    So presumably the dbh object (isa DBIx::Connector) contains CODE references. Is there a better alternative for serializing this Moose class than setting $Storable::Deparse and $Storable::Eval?
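
    One lighter-weight alternative (a sketch, assuming the connector attribute is called dbh and is lazily built): give the attribute a clearer and drop the handle just before the object goes into the session, so Storable never sees the CODE refs inside DBIx::Connector:

        has 'dbh' => (
            is      => 'ro',
            lazy    => 1,
            builder => '_build_dbh',
            clearer => 'clear_dbh',
        );

        # ...immediately before handing the object to Apache::Session::File:
        $cell->clear_dbh;         # object is now plain data, safe for Storable
        $session{cell} = $cell;   # dbh is rebuilt lazily on the next call_dbh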

    Read the article

  • GAE modeling relationship options

    - by Sway
    Hi there, I need to model the following situation and I can't seem to find a consistent example of how to do it "correctly" for the Google App Engine. Suppose I've got a simple situation like the following:

        [Company] 1 ----- M [Store]

    A company has one to many stores. Each store has an address made up of an address line 1, city, state, country, postcode, etc. OK. Let's say we need to create an "Audit". An audit is for a company and can be across one to many stores. So something like:

        [Audit] 1 ------ 1 [Company] 1 ------ M [Store]

    Now we need to query all of the audits based on the store addresses, in order to send the auditors to the right locations. There seem to be numerous articles like this one: http://code.google.com/appengine/articles/modeling.html which give examples of creating a "ContactCompany" model class. However, they also say that you should use this kind of relationship only when you "really need to", and with "care" for performance. I've also read - frequently - that you should denormalize as much as possible, thereby moving all of the queryable data into the Audit class. So what would you suggest as the best way to solve this? I've seen that there is an Expando class, but I'm not sure if that is the best option for this. Any help or thoughts on this would be totally appreciated. Thanks in advance, Matt
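
    A sketch of the denormalized route (all model and property names below are illustrative assumptions, using the google.appengine.ext.db API of the time):

        from google.appengine.ext import db

        class Company(db.Model):
            name = db.StringProperty()

        class Store(db.Model):
            company = db.ReferenceProperty(Company, collection_name='stores')
            city = db.StringProperty()
            state = db.StringProperty()
            country = db.StringProperty()
            postcode = db.StringProperty()

        class Audit(db.Model):
            company = db.ReferenceProperty(Company)
            stores = db.ListProperty(db.Key)        # one audit covers many stores
            # denormalized copies of the queryable address fields:
            cities = db.StringListProperty()
            postcodes = db.StringListProperty()

    "All audits covering a store in a given city" is then a single indexed query, e.g. Audit.all().filter('cities =', 'Auckland'), at the cost of keeping the copied lists in sync when stores change.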

    Read the article

  • Out of Core Implementation of a Quadtree

    - by Nima
    Hi, I am trying to build a quadtree data structure (or let's just say a tree) in secondary memory (hard disk). I have a C++ program to do so, and I use fopen to create the files. Also, I am using tesseral coding, storing each cell in a file named with its corresponding code, all in one directory. The problem is that after creating about 1,100 files, fopen just returns NULL and stops creating new files. I can create further files manually in that directory, but C++ cannot create any more. I know about the maximum number of inodes on an ext3 filesystem, which is (from Wikipedia) 32,000, but mine is way less than that; also note that I can create files manually on the disk, just not through fopen. I would also really appreciate any ideas regarding the best way to store a very dynamic quadtree on disk (I need the nodes to be in separate files, and the quadtree might have a depth of 50). Using nested directories is one idea, but I think that would slow down performance because of following the links on the filesystem to access each file. Thanks, Nima
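
    A small diagnostic sketch (not from the original post): when fopen starts returning NULL, errno says why. EMFILE - the per-process open-file limit, often 1024 - is a frequent culprit when handles are never fclose()d, and ~1,100 files is suspiciously close to that limit:

        #include <cstdio>
        #include <cerrno>
        #include <cstring>

        FILE* open_cell(const char* path) {
            FILE* f = std::fopen(path, "wb");
            if (!f) {
                // EMFILE: too many open files in this process
                // ENOSPC: no space (or no inodes) left on the filesystem
                std::fprintf(stderr, "fopen(%s) failed: %s\n", path, std::strerror(errno));
            }
            return f;  // caller must fclose() it, or the descriptor limit will be hit
        }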

    Read the article

  • ASP.NET MVC solution to a forms application?

    - by Gloria Huang
    Hello, we're building a survey system using ASP.NET MVC and wondered if anyone can offer suggestions on the architecture. Here's the problem we're trying to solve. Essentially, an agency sends out several surveys every year. They're very structured and not like SurveyMonkey-style surveys - they're really applications for feedback. Much like a visa application, there is a lot applicants need to do, and sometimes it takes them 2-3 weeks to fill one out. They can upload files (proofs of purchase etc. - PDF/JPG) and also multiple "items". E.g., say they've worked for McDonald's: there could be 20 different franchises, so they build a list of the locations they've worked at. Three weeks later there could be another 3 new locations, and 2 may have closed down. So we need to ensure the forms can handle those situations. The forms themselves (markup and data) change every year - I should mention that this is for a taxation/finance/budget system. We were thinking of using MVC, using XML to store the data (temporarily), XSD to validate the data, and XSL to transform the data into presentable markup (for them to fill out); then once they "Submit" an application it gets stored into the DB in the relevant areas. When the user starts the application process, they can save their progress so far (we validate whatever they entered and ignore whatever they haven't), save it as an XML blob, and store it in the DB. When they're finally ready to submit it, we do a full validation, upload the files and store them securely (they include business proofs and accounting statements), and then run some workflows. What I'm really concerned about is how to manage changing form versions (a year later). How are form/application systems written these days? We have 2 months to pull this off and about 30 forms to deliver. So 30x XML, 30x XSD, 30x XSL.
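
    A sketch of the submit-time validation step described above (method and path names are assumptions); tagging each saved blob with its form version lets the right year's XSD/XSL be resolved later:

        using System.Collections.Generic;
        using System.IO;
        using System.Xml;
        using System.Xml.Schema;

        public static bool IsValidApplication(string xml, string xsdPath, out List<string> errors)
        {
            var found = new List<string>();
            var settings = new XmlReaderSettings { ValidationType = ValidationType.Schema };
            settings.Schemas.Add(null, xsdPath);   // e.g. the XSD for that form's year/version
            settings.ValidationEventHandler += (s, e) => found.Add(e.Message);

            // A full read pass drives schema validation; errors accumulate in the handler.
            using (var reader = XmlReader.Create(new StringReader(xml), settings))
                while (reader.Read()) { }

            errors = found;
            return found.Count == 0;
        }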

    Read the article

  • Python, dictionaries, and chi-square contingency table

    - by rohanbk
    I have a file which contains several lines in the following format (word, time that the word occurred in, and frequency of documents containing the given word within the given instance in time):

        # inputfile: <word, time, frequency>
        apple, 1, 3
        banana, 1, 2
        apple, 2, 1
        banana, 2, 4
        orange, 3, 1

    I have the Python class below that I used to create 2-D dictionaries to store the above file, using <word, time> as the key and frequency as the value:

        class Ddict(dict):
            '''2D dictionary class'''
            def __init__(self, default=None):
                self.default = default

            def __getitem__(self, key):
                if not self.has_key(key):
                    self[key] = self.default()
                return dict.__getitem__(self, key)

        wordtime = Ddict(dict)  # store each inputfile entry with a <word,time> key
        timeword = Ddict(dict)  # store each inputfile entry with a <time,word> key

        # Loop over every line of the inputfile
        for line in open('inputfile'):
            word, time, count = (field.strip() for field in line.split(','))
            count = int(count)  # counts are numbers, not strings

            # If <word,time> already a key, increment the count;
            # otherwise, create the key
            try:
                wordtime[word][time] += count
            except KeyError:
                wordtime[word][time] = count

            # If <time,word> already a key, increment the count;
            # otherwise, create the key
            try:
                timeword[time][word] += count
            except KeyError:
                timeword[time][word] = count

    The question that I have pertains to calculating certain things while iterating over the entries in this 2D dictionary. For each word 'w' at each time 't', calculate:

    (a) The number of documents with word 'w' within time 't'.
    (b) The number of documents without word 'w' within time 't'.
    (c) The number of documents with word 'w' outside time 't'.
    (d) The number of documents without word 'w' outside time 't'.

    Each of the items above represents one of the cells of a chi-square contingency table for each word and time. Can all of these be calculated within a single loop or do they need to be done one at a time? Ideally, I would like the output to be what's below, where a, b, c, d are the items calculated above:

        print "%s, %s, %s, %s" % (a, b, c, d)
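
    The four cells do fall out of a single pass once the marginal totals are known. A sketch built on the dictionaries above (note the assumption: it treats the frequency table's marginal sums as the document totals, since the file carries no separate per-time document count):

        total = sum(sum(times.values()) for times in wordtime.values())
        word_totals = dict((w, sum(times.values())) for w, times in wordtime.items())
        time_totals = dict((t, sum(words.values())) for t, words in timeword.items())

        for w, times in wordtime.items():
            for t, a in times.items():
                b = time_totals[t] - a        # without w, within t
                c = word_totals[w] - a        # with w, outside t
                d = total - a - b - c         # without w, outside t
                print "%s, %s, %s, %s" % (a, b, c, d)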

    Read the article

  • How to implement login and service features as in Skype or MSN chat in a WPF application

    - by black sensei
    Hello good people! I'm building a WPF application that connects to web services for its operations. The things I need to work are fine so far. Now I'd like to improve the user experience by adding features like an editable username combobox, "sign me in when the app starts" (as in Skype), and "start when the computer starts". I have a fair idea about each feature but very little knowledge about their implementation.

    Question 1 - username combobox: I use a combobox with IsEditable set to true, but it doesn't have the previous usernames. Would that mean I have to store every successful login username, in SQLite for example?

    Question 2 - sign me in when the app starts: I'm thinking of using SQLite after all to store the credentials, plus a value (as in true or false) for whether auto-login should be performed.

    Question 3 - start when the computer starts: I know it's about having it start automatically, but the process of registering and unregistering it when a checkbox is checked or unchecked is a bit confusing to me.

    Question 4 - a "Please wait (signing in)" like Skype's: to show a "please wait" state during login (login is over a web service) in a WPF application, should I use an animated GIF in a grid that I show while hiding the login combobox and passwordbox grid, or should I use an animated object (which I know nothing about for now)?

    This post is mainly for you experts to point me to the right resources and tell me what is done as best practice - the dos and don'ts. Thanks for reading this, and please give me a clear idea of how to start implementing those features. Thanks again.
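
    On Question 3: for a user-facing app, a per-user Run registry key is the usual mechanism rather than a Windows service - a sketch (the value name "MyChatApp" is a placeholder):

        using Microsoft.Win32;
        using System.Reflection;

        static void SetRunAtStartup(bool enable)
        {
            const string runKey = @"Software\Microsoft\Windows\CurrentVersion\Run";
            using (RegistryKey key = Registry.CurrentUser.OpenSubKey(runKey, true))
            {
                if (enable)
                    key.SetValue("MyChatApp", Assembly.GetEntryAssembly().Location);
                else
                    key.DeleteValue("MyChatApp", false);  // no error if the value is absent
            }
        }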

    Read the article

  • Rails 2.3 session

    - by Sam Kong
    Hi, I am developing a Rails 2.3.2 app. I need to keep a session_id on an order record, retrieve it, and finally delete the session_id when the order is completed. This worked when I used cookies as the session store, but it doesn't for the active_record store. (I restarted my browser, so it's not a cache issue.) I know Rails 2.3 implements lazy session loading. I read some info about it but am still confused. Can somebody clarify how I use session_id in such a case? What I am doing is: a user makes an order going through several pages. There is no sign-up and no login. So I keep the session_id in the order record so that no other user can access the order:

        @order = Order.last(:conditions => { :id         => params[:id],
                                             :session_id => session[:session_id] })

    When the order is finished, I set the session_id column to nil. How would you implement such a case in a lazy-session (and active_record store) environment? Thanks. Sam
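
    For context, with Rails 2.3's lazy sessions the session row (and its id) isn't created until the session is written to, so one common workaround is to touch the session first and read the id through the request - a sketch:

        def current_session_id
          session[:init] = true                 # force the lazy session to materialize
          request.session_options[:id]          # the session id under lazy loading
        end

        @order = Order.last(:conditions => { :id         => params[:id],
                                             :session_id => current_session_id })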

    Read the article

  • ETL, Esper or Drools?

    - by geoaxis
    Hello, the question environment relates to Java EE and Spring. I am developing a system which can start and stop arbitrary TCP (or other) listeners for incoming messages. There could be a need to authenticate these messages. These messages need to be parsed and stored in some other entities. These entities model which fields they store. So, for example, if I have property1 that can have two text fields, FillLevel1 and FillLevel2, I could receive messages over TCP which have both fill levels specified in text as F1=100;F2=90. Later I could add another field, say FillLevel3, and start receiving messages like F1=xx;F2=xx;F3=xx - but this is a conscious decision on the part of the system modeler. My question is: what do you think is better to use for parsing and storing the messages?

    1. ETL (using Pentaho, which is used in another system): store the raw message and use a task executor to consume the messages one by one, storing the transformed messages as per your rules.
    2. Esper or Drools to do the same thing, storing rules and executing them with a timer - but I am not sure how dynamic you could get with making rules (they have to be made by the end user in a running system, and preferably in the most user-friendly way, i.e. no scripts or code, only a GUI).

    The end user should be capable of changing the parse rules. It is also possible that the end user might want to change the archived data as well (for example, in the case above, if a new FillLevel is added, one would like to put FillLevel=-99 in the previous records to keep the data consistent). Please ask for explanations; I have the feeling that I need to revise this question a bit. Thanks
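
    Whichever engine runs them, the user-editable parse rules for messages like F1=100;F2=90 reduce to a key-to-field mapping - a sketch (all names are illustrative; the mapping table would be maintained through the GUI):

        import java.util.HashMap;
        import java.util.Map;

        public final class MessageParser {
            // user-maintained mapping, e.g. loaded from the DB: "F1" -> "FillLevel1"
            private final Map<String, String> fieldMap;

            public MessageParser(Map<String, String> fieldMap) {
                this.fieldMap = fieldMap;
            }

            public Map<String, String> parse(String raw) {
                Map<String, String> values = new HashMap<String, String>();
                for (String pair : raw.split(";")) {
                    String[] kv = pair.split("=", 2);
                    if (kv.length < 2) continue;          // malformed pair, skip
                    String field = fieldMap.get(kv[0].trim());
                    if (field != null) {                  // unknown keys are ignored
                        values.put(field, kv[1].trim());
                    }
                }
                return values;
            }
        }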

    Read the article

  • Full-text search on App Engine with Whoosh

    - by Martin
    I need to do full-text searching with Google App Engine. I found the Whoosh project, and it works really well, as long as I use the App Engine development environment... When I upload my application to App Engine, I get the following traceback. For my tests, I am using the example application provided with this project. Any idea what I am doing wrong?

        <type 'exceptions.ImportError'>: cannot import name loads
        Traceback (most recent call last):
          File "/base/data/home/apps/myapp/1.334374478538362709/hello.py", line 6, in <module>
            from whoosh import store
          File "/base/data/home/apps/myapp/1.334374478538362709/whoosh/__init__.py", line 17, in <module>
            from whoosh.index import open_dir, create_in
          File "/base/data/home/apps/myapp/1.334374478538362709/whoosh/index.py", line 31, in <module>
            from whoosh import fields, store
          File "/base/data/home/apps/myapp/1.334374478538362709/whoosh/store.py", line 27, in <module>
            from whoosh import tables
          File "/base/data/home/apps/myapp/1.334374478538362709/whoosh/tables.py", line 43, in <module>
            from marshal import loads

    Here are the imports I have in my Python file:

        # Whoosh ----------------------------------------------------------------
        sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), '..', 'utils')))
        from whoosh.fields import Schema, STORED, ID, KEYWORD, TEXT
        from whoosh.index import getdatastoreindex
        from whoosh.qparser import QueryParser, MultifieldParser

    Thank you in advance for your help!

    Read the article

  • PostgreSQL table for storing automation test results

    - by Martin
    I am building an automation test suite which runs on multiple machines, all reporting their status to a PostgreSQL database. We will run a number of automated tests, for which we will store the following information:

    - test ID (a GUID)
    - test name
    - test description
    - status (running, done, waiting to be run)
    - progress (%)
    - start time of test
    - end time of test
    - test result
    - latest screenshot of the running test (updated every 30 seconds)

    The number of tests isn't huge (say a few thousand), and each machine (say, 50 of them) has a service which checks the database and figures out if it's time to start a new automated test on that machine. How should I organize my SQL tables to store all the information? Is a single table with a column per attribute the way to go? If in the future I need to add attributes but want to keep compatibility with the old database format (i.e. I may not want to delete and create a new table with more columns), how should I proceed? Should the new attributes just be in a different table? I'm also thinking of replicating the database. In case of failure, I don't mind if the latest screenshots aren't backed up on the slave database. Should I just store the screenshots in their own table to simplify the replication? Thanks!
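
    A sketch of one possible layout (all names and types are assumptions). Keeping screenshots in their own table both moves the hot, 30-second rewrites away from the test rows and makes them easy to exclude from replication:

        CREATE TABLE test_run (
            test_id     uuid PRIMARY KEY,
            name        text NOT NULL,
            description text,
            status      text NOT NULL CHECK (status IN ('waiting', 'running', 'done')),
            progress    integer CHECK (progress BETWEEN 0 AND 100),
            started_at  timestamptz,
            ended_at    timestamptz,
            result      text
        );

        CREATE TABLE test_screenshot (
            test_id     uuid PRIMARY KEY REFERENCES test_run (test_id),
            captured_at timestamptz NOT NULL,
            image       bytea NOT NULL
        );

    Later attributes can be added with ALTER TABLE ... ADD COLUMN (nullable, so old rows stay valid) rather than a new table.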

    Read the article

  • Error in ASP.NET C# code (MySQL database connection)

    - by Ishan
    My code is meant to update a record if it already exists in the database, or else insert it as a new record. The code is as follows:

        protected void Button3_Click(object sender, EventArgs e)
        {
            OdbcConnection MyConnection = new OdbcConnection(
                "Driver={MySQL ODBC 3.51 Driver};Server=localhost;Database=testcase;User=root;Password=root;Option=3;");
            MyConnection.Open();

            String MyString = "select fil_no,orderdate from temp_save where fil_no=? and orderdate=?";
            OdbcCommand MyCmd = new OdbcCommand(MyString, MyConnection);
            MyCmd.Parameters.AddWithValue("", HiddenField4.Value);
            MyCmd.Parameters.AddWithValue("", TextBox3.Text);

            using (OdbcDataReader MyReader4 = MyCmd.ExecuteReader())
            {
                if (MyReader4.Read())
                {
                    String MyString1 = "UPDATE temp_save SET order=? where fil_no=? AND orderdate=?";
                    OdbcCommand MyCmd1 = new OdbcCommand(MyString1, MyConnection);
                    MyCmd1.Parameters.AddWithValue("", Editor1.Content.ToString());
                    MyCmd1.Parameters.AddWithValue("", HiddenField1.Value);
                    MyCmd1.Parameters.AddWithValue("", TextBox3.Text);
                    MyCmd1.ExecuteNonQuery();
                }
                else
                {
                    // set the SQL string
                    String strSQL = "INSERT INTO temp_save (fil_no,order,orderdate) " +
                                    "VALUES (?,?,?)";
                    // Create the Command and set its properties
                    OdbcCommand objCmd = new OdbcCommand(strSQL, MyConnection);
                    objCmd.Parameters.AddWithValue("", HiddenField4.Value);
                    objCmd.Parameters.AddWithValue("", Editor1.Content.ToString());
                    objCmd.Parameters.AddWithValue("", TextBox3.Text);
                    // execute the command
                    objCmd.ExecuteNonQuery();
                }
            }
        }

    I am getting the error:

        ERROR [42000] [MySQL][ODBC 3.51 Driver][mysqld-5.1.51-community]You have an error in your
        SQL syntax; check the manual that corresponds to your MySQL server version for the right
        syntax to use near 'order,orderdate) VALUES
        ('04050040272009','&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&' at line 1

    The datatypes of the fields in table temp_save are:

        fil_no    --> INT(15)  (to store a 15-digit number)
        order     --> LONGTEXT (to store contents from the HTMLEditor (AJAX control))
        orderdate --> DATE     (to store the date)

    Please help me resolve this error.
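
    For what it's worth, ORDER is a reserved word in MySQL, and the error points exactly at it; quoting the column name with backticks (or renaming the column) is the usual fix - a sketch of the two affected statements:

        UPDATE temp_save SET `order` = ? WHERE fil_no = ? AND orderdate = ?;
        INSERT INTO temp_save (fil_no, `order`, orderdate) VALUES (?, ?, ?);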

    Read the article

  • Python Django Global Variables

    - by Joe J
    Hi all, I'm looking for a simple but recommended way in Django to store a variable in memory only - when Apache restarts or the Django development server restarts, the variable is reset back to 0. More specifically, I want to count how many times a particular action takes place on each model instance (database record), but for performance reasons, I don't want to store these counts in the database. I don't care if the counts disappear after a server restart. But as long as the server is up, I want these counts to be consistent between the Django shell and the web interface, and I want to be able to return how many times the action has taken place on each model instance. I don't want the variables to be associated with a user or session, because I might want to return these counts without being logged in (and I want the counts to be consistent no matter what user is logged in). Am I describing a global variable? If so, how do I use one in Django? I've noticed that files like urls.py, settings.py and models.py seem to be parsed only once per server startup, in contrast to views.py, which seems to be parsed each time a request is made. Does this mean I should declare my variables in one of those files? Or should I store them in a model attribute somehow (as long as it sticks around for as long as the server is running)? This is probably an easy question, but I'm just not sure how it's done in Django. Any comments or advice are much appreciated. Thanks, Joe
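
    The module-level version of that global would look something like this (a sketch; the module and names are illustrative). One honest caveat: module state is per-process, so under multi-process Apache - and between the web server and a separate shell process - the counts will diverge; the shell/web consistency described above would need an external store such as memcached.

        # counters.py -- lives as long as the process that imported it
        from collections import defaultdict
        import threading

        _lock = threading.Lock()
        _counts = defaultdict(int)      # model instance pk -> action count

        def bump(pk):
            with _lock:                 # keep increments safe under threaded servers
                _counts[pk] += 1

        def count_for(pk):
            return _counts[pk]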

    Read the article

  • iPhone and Core Data: how to retain user-entered data between updates?

    - by Shaggy Frog
    Consider an iPhone application that is a catalogue of animals. The application should allow the user to add custom information for each animal -- let's say a rating (on a scale of 1 to 5), as well as some notes they can enter in about the animal. However, the user won't be able to modify the animal data itself. Assume that when the application gets updated, it should be easy for the (static) catalogue part to change, but we'd like the (dynamic) custom user information part to be retained between updates, so the user doesn't lose any of their custom information. We'd probably want to use Core Data to build this app. Let's also say that we have a previous process already in place to read in animal data to pre-populate the backing (SQLite) store that Core Data uses. We can embed this database file into the application bundle itself, since it doesn't get modified. When a user downloads an update to the application, the new version will include the latest (static) animal catalogue database, so we don't ever have to worry about it being out of date. But, now the tricky part: how do we store the (dynamic) user custom data in a sound manner? My first thought is that the (dynamic) database should be stored in the Documents directory for the app, so application updates don't clobber the existing data. Am I correct? My second thought is that since the (dynamic) user custom data database is not in the same store as the (static) animal catalogue, we can't naively make a relationship between the Rating and the Notes entities (in one database) and the Animal entity (in the other database). In this case, I would imagine one solution would be to have an "animalName" string property in the Rating/Notes entity, and match it up at runtime. Is this the best way to do it, or is there a way to "sync" two different databases in Core Data?
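
    For the two-store question, a sketch of the plumbing (store file names, configuration names, and the documentsDirectory variable are assumptions; the "Catalogue" and "UserData" configurations would be defined in the model editor, and cross-store links kept as stored IDs or fetched properties, since Core Data relationships cannot span stores):

        NSError *error = nil;
        NSPersistentStoreCoordinator *psc =
            [[NSPersistentStoreCoordinator alloc] initWithManagedObjectModel:model];

        // Read-only catalogue, shipped inside the app bundle.
        NSURL *catalogueURL = [NSURL fileURLWithPath:
            [[NSBundle mainBundle] pathForResource:@"Animals" ofType:@"sqlite"]];
        [psc addPersistentStoreWithType:NSSQLiteStoreType
                          configuration:@"Catalogue"
                                    URL:catalogueURL
                                options:[NSDictionary dictionaryWithObject:[NSNumber numberWithBool:YES]
                                                                    forKey:NSReadOnlyPersistentStoreOption]
                                  error:&error];

        // User ratings/notes, kept in Documents so app updates never clobber it.
        NSURL *userURL = [NSURL fileURLWithPath:
            [documentsDirectory stringByAppendingPathComponent:@"UserData.sqlite"]];
        [psc addPersistentStoreWithType:NSSQLiteStoreType
                          configuration:@"UserData"
                                    URL:userURL
                                options:nil
                                  error:&error];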

    Read the article

  • Scripts to parse and download iTunes Connect and AppStore data

    - by bradhouse
    I'm looking for recommendations of a script or series of scripts that download and parse iTunes Connect sales data, plus App Store comments, ratings and rankings data, for a defined app. I'm also aware of solutions like:

    - AppViz
    - appsales-mobile
    - iphone-stats
    - Heartbeat.app

    I'm sure I'll find a few more with more searching. I can't help but feel there must be a really decent set of open-source scripts out there to do this, given how many developers are now writing apps for the App Store. I'd be interested to hear about commercial offerings as well (although my personal preference is for open source, so I can at least see what it is doing with my iTunes Connect login credentials). To be clear, I'm really looking for something that hits all of the areas mentioned:

    - App Store (per store):
        - Comments
        - Ratings
        - Category/store rankings
    - iTunes Connect:
        - The contents of the sales reports

    Analysis/graphs of the data are not necessary (but would be nice to have, I guess). I'm not really looking for something like AppSales Mobile above; I would like the raw data so I can do my own analysis and formatting. So far it looks like AppViz (listed above) is the best out there. Any suggestions on what is good/available, or should I just go roll my own?

    Read the article

  • Caching sitemaps in django

    - by michuk
    I implemented a simple sitemap class using Django's default sitemap app. As it was taking a long time to execute, I added manual caching:

        class ShortReviewsSitemap(Sitemap):
            changefreq = "hourly"
            priority = 0.7

            def items(self):
                # try to retrieve from cache
                result = get_cache(CACHE_SITEMAP_SHORT_REVIEWS, "sitemap_short_reviews")
                if result is not None:
                    return result

                result = ShortReview.objects.all().order_by("-created_at")

                # store in cache
                set_cache(CACHE_SITEMAP_SHORT_REVIEWS, "sitemap_short_reviews", result)
                return result

            def lastmod(self, obj):
                return obj.updated_at

    The problem is that memcached allows at most a 1MB object. This one was bigger than 1MB, so storing it in the cache failed:

        >7 SERVER_ERROR object too large for cache

    The problem is that Django has an automated way of deciding when it should divide the sitemap file into smaller ones. According to the docs (http://docs.djangoproject.com/en/dev/ref/contrib/sitemaps/): "You should create an index file if one of your sitemaps has more than 50,000 URLs. In this case, Django will automatically paginate the sitemap, and the index will reflect that." What do you think would be the best way to enable caching of sitemaps?

    - Hacking into the Django sitemaps framework to restrict a single sitemap's size to, let's say, 10,000 records seems like the best idea. Why was 50,000 chosen in the first place? Google's advice? A random number?
    - Or maybe there is a way to allow memcached to store bigger files?
    - Or perhaps, once saved, the sitemaps should be made available as static files? This would mean that instead of caching with memcached I'd have to manually store the results in the filesystem and retrieve them from there the next time the sitemap is requested (perhaps cleaning the directory daily in a cron job).

    All those seem very low-level, and I'm wondering if an obvious solution exists...
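
    One way around the 1MB cap without touching the sitemaps framework (a sketch using Django's low-level cache object; the key scheme is an assumption): pickle the value, split it into sub-1MB chunks, and cache each chunk under a derived key:

        import pickle

        CHUNK = 1000 * 1000  # stay safely under memcached's 1MB item limit

        def set_chunked(cache, key, value, timeout):
            blob = pickle.dumps(value, pickle.HIGHEST_PROTOCOL)
            chunks = [blob[i:i + CHUNK] for i in range(0, len(blob), CHUNK)]
            cache.set(key, len(chunks), timeout)                 # head key records the chunk count
            for n, chunk in enumerate(chunks):
                cache.set('%s:%d' % (key, n), chunk, timeout)

        def get_chunked(cache, key):
            count = cache.get(key)
            if count is None:
                return None
            parts = [cache.get('%s:%d' % (key, n)) for n in range(count)]
            if any(p is None for p in parts):                    # a chunk was evicted
                return None
            return pickle.loads(''.join(parts))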

    Read the article

  • Display a dijit Tree using Zend Framework

    - by churris43
    I am trying to display a tree of categories and subcategories using dijits with Zend Framework, but I haven't been able to find a good example. This is what I've got. Basically, I have the following action:

        class SubcategoriesController extends Zend_Controller_Action
        {
            // .....

            public function loadtreeAction()
            {
                Zend_Dojo::enableView($this->view);
                Zend_Layout::getMvcInstance()->disableLayout();

                // Creating a sample tree of categories and subcategories
                $a["cat1"]["id"] = "id1";
                $a["cat1"]["name"] = "Category1";
                $a["cat1"]["type"] = "category";
                $subcat1 = array("id" => "Subcat1",  "name" => "Subcategory1",  "type" => "subcategory");
                $subcat2 = array("id" => "Subcat12", "name" => "Subcategory12", "type" => "subcategory");
                $a["cat1"]["children"] = array($subcat1, $subcat2);

                $treeObj = new Zend_Dojo_Data('id', $a);
                $treeObj->setLabel('name');
                $this->view->tree = $treeObj->toJson();
            }

            // ....
        }

    And in my view:

        <?php
        $this->dojo()->requireModule('dojo.data.ItemFileReadStore');
        $this->dojo()->requireModule('dijit.Tree');
        $this->dojo()->requireModule('dojo.parser');
        ?>
        <div dojoType="dojo.data.ItemFileReadStore" url="/Subcategories/loadtree" jsId="store"></div>
        <div dojoType="dijit.tree.ForestStoreModel" jsId="treeModel" store="store"
             rootId="root" rootLabel="List of Categories" childrenAttrs="children"
             query="{type:'category'}"></div>
        <div dojoType="dijit.Tree" model="treeModel" labelAttrs="ListOfCategories"></div>

    It doesn't even seem to try to load the tree at all. Any help is appreciated.

    Read the article

  • Does LaTeX have an array data structure?

    - by drasto
    Are there arrays in LaTeX? I don't mean the way to typeset arrays; I mean arrays as a data structure in LaTeX/TeX as a "programming language". I need to store a number of vboxes or hboxes in an array. It may be something like "an array of macros". More details: I have an environment that should typeset songs. I need to store some of the songs' paragraphs given as arguments to my macro \songparagraph (so I will not typeset them, just store those paragraphs). As I don't know how many paragraphs there can be in one particular song, I need an array for this. When the environment is closed, all the paragraphs will be typeset - but they will first be measured and the best placement for each paragraph will be computed (for example, some paragraphs can be put side by side in two columns to make the song look more compact and save some space). Any ideas would be welcome. Please, if you know about arrays in LaTeX, post a link to some basic documentation, a tutorial, or just the basic commands.
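
    The usual idiom is to fake an array with \csname-constructed control sequences, one box register per index - a minimal sketch (the macro names are illustrative):

        \newcount\songcount

        \newcommand{\storeparagraph}[1]{%
          \global\advance\songcount by 1
          \expandafter\newsavebox\csname songbox\the\songcount\endcsname
          \global\setbox\csname songbox\the\songcount\endcsname=\vbox{#1}%
        }

        % \useparagraph{3} typesets the third stored paragraph
        \newcommand{\useparagraph}[1]{%
          \usebox{\csname songbox#1\endcsname}%
        }

    The stored boxes can then be measured (via \wd and \ht) before deciding on a layout.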

    Read the article

  • Are there arrays in LaTeX? (Not asking for the way to typeset them, but for use in programming)

    - by drasto
    Are there arrays in LaTeX? I don't mean the way to typeset arrays; I mean arrays as a data structure in LaTeX/TeX as a "programming language". I need to store a number of vboxes or hboxes in an array, or it may be something like "an array of macros". More details: I have an environment that should typeset songs. I need to store some song paragraphs given as arguments to my macro \songparagraph (so I will not typeset them, just store those paragraphs). As I don't know how many paragraphs there can be in one particular song, I need an array for this... When the environment is closed, all the paragraphs will be typeset - but they will first be measured and the best placement for each paragraph will be computed (for example, some paragraphs can be put side by side in two columns to make the song look more compact and save some space). Any ideas would be welcome. Please, if you know of arrays in LaTeX, post a link to some basic documentation, a tutorial, or just the basic commands.

    Read the article

  • Caching and accessing configuration data in ASP.NET MVC app.

    - by Sosh
    I'm about to take a look at how to implement internationalisation for an ASP.NET MVC project. I'm looking at how to allow the user to change languages; my initial thought is a dropdownlist containing each of the supported languages. However, a few questions have come to mind:

    1. How to store the list of supported languages? (e.g. just "en", "English"; "fr", "French"; etc.) An XML file? .config files?
    2. If I store this in a file I'll have to cache it (at startup, I guess). So what would be best: load the XML data into a list (somehow) and store that list in the System.Web.Cache? Application state?
    3. How then to load this data into the view (for display in a dropdown)? Give the view direct access to the cache?

    Just want to make sure I'm going in the right direction here... Thank you.
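
    A sketch of questions 1 and 2 together (the file path and XML shape are assumptions): load an XML file once into HttpRuntime.Cache with a file dependency, so edits to the file invalidate the cached list automatically:

        using System.Collections.Generic;
        using System.Linq;
        using System.Web;
        using System.Web.Caching;
        using System.Xml.Linq;

        public static IList<KeyValuePair<string, string>> SupportedLanguages()
        {
            var cached = HttpRuntime.Cache["languages"] as IList<KeyValuePair<string, string>>;
            if (cached != null) return cached;

            // assumed format: <languages><language code="en" name="English"/>...</languages>
            string path = HttpContext.Current.Server.MapPath("~/App_Data/languages.xml");
            var langs = XDocument.Load(path).Root.Elements("language")
                .Select(e => new KeyValuePair<string, string>((string)e.Attribute("code"),
                                                              (string)e.Attribute("name")))
                .ToList();

            HttpRuntime.Cache.Insert("languages", langs, new CacheDependency(path));
            return langs;
        }

    For question 3, the controller can then pass this list into the view's model (or ViewData) rather than giving the view direct cache access.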

    Read the article

  • Java - encrypt / decrypt user name and password from a configuration file

    - by nzpcmad
    We are busy developing a Java web service for a client. There are two possible choices:

    1. Store the encrypted user name / password on the web service client. Read from a config file on the client side, decrypt and send.
    2. Store the encrypted user name / password on the web server. Read from a config file on the web server, decrypt and use in the web service.

    The user name / password is used by the web service to access a third-party application. The client already has classes that provide this functionality, but this approach involves sending the user name / password in the clear (albeit within the intranet). They would prefer storing the info within the web service, but don't really want to pay for something they already have. (Security is not a big consideration because it's only within their intranet.) So we need something quick and easy in Java. Any recommendations? The server is Tomcat 5.5. The web service is Axis2. What encrypt / decrypt package should we use? What about a key store? What configuration mechanism should we use? Will this be easy to deploy?
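
    For the quick-and-easy ask, javax.crypto in the JDK covers it with no extra libraries - a minimal sketch, not a vetted design (plain "AES" defaults to ECB mode, tolerable here only because security is explicitly not a big consideration; the key itself still has to live somewhere readable, e.g. a keystore or a permission-protected file):

        import javax.crypto.Cipher;
        import javax.crypto.spec.SecretKeySpec;
        import java.util.Base64;

        public final class ConfigCrypto {
            private final SecretKeySpec key;   // 16 raw bytes -> AES-128

            public ConfigCrypto(byte[] rawKey) {
                this.key = new SecretKeySpec(rawKey, "AES");
            }

            public String encrypt(String plain) throws Exception {
                Cipher c = Cipher.getInstance("AES");
                c.init(Cipher.ENCRYPT_MODE, key);
                return Base64.getEncoder().encodeToString(c.doFinal(plain.getBytes("UTF-8")));
            }

            public String decrypt(String encoded) throws Exception {
                Cipher c = Cipher.getInstance("AES");
                c.init(Cipher.DECRYPT_MODE, key);
                return new String(c.doFinal(Base64.getDecoder().decode(encoded)), "UTF-8");
            }
        }

    The Base64 ciphertext goes into the properties/XML config file and is decrypted once at startup.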

    Read the article

  • Using variables inside macros in SQL

    - by Tim
    Hello, I want to use variables inside my macro SQL on Teradata. I thought I could do something like the following:

        REPLACE MACRO DbName.MyMacro (MacroNm VARCHAR(50)) AS (
            /* Variable to store the last time the macro was run */
            DECLARE V_LAST_RUN_DATE TIMESTAMP;

            /* Get the last run date and store it in V_LAST_RUN_DATE */
            SELECT LastDate
            INTO V_LAST_RUN_DATE
            FROM DbName.RunLog
            WHERE MacroNm = :MacroNm;

            /* Update the last run date to now and save the old date in history */
            EXECUTE MACRO DbName.RunLogUpdater(:MacroNm, V_LAST_RUN_DATE, CURRENT_TIMESTAMP);
        );

    However, that didn't work, so I thought of this instead:

        REPLACE MACRO DbName.MyMacro (MacroNm VARCHAR(50)) AS (
            /* Variable to store the last time the macro was run */
            CREATE VOLATILE TABLE MacroVars AS (
                SELECT LastDate AS V_LAST_RUN_DATE
                FROM DbName.RunLog
                WHERE MacroNm = :MacroNm
            ) WITH DATA ON COMMIT PRESERVE ROWS;

            /* Update the last run date to now and save the old date in history */
            EXECUTE MACRO DbName.RunLogUpdater(
                :MacroNm,
                SELECT V_LAST_RUN_DATE FROM MacroVars,
                CURRENT_TIMESTAMP
            );
        );

    I can do what I'm looking for with a stored procedure, but I want to avoid that for performance reasons. Do you have any ideas about this? Is there anything else I can try? Cheers, Tim

    Read the article

  • Building a custom (dynamic) dataset and grid

    - by marko.ivanovski.nz
    Hi, I'm in the process of building a dynamic table in which you can add and remove rows and columns, so it varies in size depending on what the user wants. Its purpose is to store properties for a product, but there can be from 1 to 10 different properties (columns) per product, and multiple instances (rows) of the product as well. Here's a screenshot of what I mean: http://i40.tinypic.com/nbqkxc.jpg As you can see, I need the structure to be completely up to the client, which is where I'm getting stuck. I've started writing a custom DataSet that has "add column" & "add row" buttons, and have built a custom Table with TextBoxes in each cell which builds from that DataSet. I have no idea how to store the data on submit though, and to make it even more complex I need to store this in the database as a string, which I think I can do by converting it to XML. Any help is appreciated. I think I just need a pointer in the right direction and am happy to do research from there. Thanks in advance, Marko
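
    A sketch of the XML round-trip (helper names are assumptions): a DataTable built from the user's columns and rows serializes to a string with its schema inline, so it can live in a single text column and be rebuilt later:

        using System.Data;
        using System.IO;

        static string TableToXml(DataTable table)
        {
            using (var writer = new StringWriter())
            {
                table.WriteXml(writer, XmlWriteMode.WriteSchema);  // schema travels with the data
                return writer.ToString();
            }
        }

        static DataTable TableFromXml(string xml)
        {
            var table = new DataTable();
            using (var reader = new StringReader(xml))
                table.ReadXml(reader);                             // schema read back from the blob
            return table;
        }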

    Read the article

  • Core Data Inferred Migration – Automatic "lightweight" vs Manual

    - by ohhorob
    I've updated the model of an existing iPhone app in some simple ways (remove attribute, add attribute, remove index), and can use automatic lightweight migration to migrate the persistent store. Due to the typical size of the data set, the processing time is not insignificant and warrants feedback for the user. NSMigrationManager provides a simple but useful migrationProgress value that sends KVO notifications as the migration is performed. That forms the basis of providing feedback; however, attempting to use an inferred model ([NSMappingModel inferredMappingModelForSourceModel:destinationModel:error:]) results in drastically different timing for the exact same dataset. Profile results on an original iPhone (2G):

        Automatic inferred lightweight migration

        PROFILE: CacheManager -migrateStore
        PROFILE: 0.6130 (+0.6130) models loaded
        PROFILE: 1.1759 (+0.5629) delegate -CacheManagerWillMigrate:
        PROFILE: 1.2516 (+0.0757) persistent store coordinator loaded
        PROFILE: 5.1436 (+3.8920) automatic lightweight migration completed
        PROFILE: 5.5435 (+0.3999) delegate -CacheManagerDidFinishMigration:withError:

        Manual inferred migration

        PROFILE: CacheManager -migrateStore
        PROFILE: 0.6660 (+0.6660) models loaded
        PROFILE: 1.1471 (+0.4811) inferred mapping model generated
        PROFILE: 1.4046 (+0.2574) delegate -CacheManagerWillMigrate:
        PROFILE: 1.5058 (+0.1013) persistent store coordinator loaded
        PROFILE: 22.6952 (+21.1894) manual migration completed
        PROFILE: 23.1478 (+0.4525) delegate -CacheManagerDidFinishMigration:withError:

    So, with an inferred model, manual migration takes over 5 times longer than automatic! It's a big inconsistency, and the lightweight option of NSPersistentStoreCoordinator -addPersistentStoreWithType:configuration:URL:options:error: provides absolutely no indication of progress while processing. Can anybody provide a supported way to get migrationProgress values during automatic migration, OR a way to configure an inferred mapping model to be as fast during manual processing as it is automatically?

    Read the article
