Search Results

Search found 11618 results on 465 pages for 'shared storage'.

  • WinRT 8.1 Phone - ListView reordering

    - by Jan Kratochvil
    I need to create a reorderable ListView in a Windows Phone 8.1 app created using WinRT. The XAML is the following (it binds to an ObservableDictionary in the code-behind):

        <Grid Margin="24">
          <ListView x:Name="MainListView"
                    CanDragItems="True"
                    CanReorderItems="True"
                    AllowDrop="True"
                    VerticalAlignment="Stretch"
                    HorizontalAlignment="Stretch">
            <ListView.ItemTemplate>
              <DataTemplate>
                <Border Padding="24" Margin="16" Background="CadetBlue">
                  <TextBlock Text="{Binding}" />
                </Border>
              </DataTemplate>
            </ListView.ItemTemplate>
          </ListView>
        </Grid>

    The ListView does nothing when I try to reorder the items (it looks like the "reordering mode" is never activated). When I run this sample on Windows 8.1 (the XAML is shared) it works as expected. According to the documentation, Windows Phone 8.1 should be supported. Is this functionality supported on the phone (and the documentation wrong), or do I need to do something special for the phone?
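    One thing worth testing (an assumption on my part, not something the documentation quoted above confirms): on the phone, ListView exposes a separate ReorderMode property, and reordering may stay inactive until it is switched on explicitly, e.g. from code-behind:

        // Hypothetical fix: enable the phone-specific reorder mode explicitly.
        MainListView.ReorderMode = ListViewReorderMode.Enabled;

    If that property is the missing piece, it would explain why CanReorderItems/AllowDrop alone work on Windows 8.1 but not on the phone.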

  • Catching the return of main function before it deallocates resources

    - by EpsilonVector
    I'm trying to implement user threads in Linux kernel 2.4, and I ran into something problematic and unexpected. Background: a thread basically executes a single function and dies, except that the first call to thread_create must turn main() into a thread as well (by default it is not a thread until the first call, which is also when all the related data structures are allocated). Since a thread executes a function and dies, we don't need to "return" anywhere with it, but we do need to save the return value to be reclaimed later with thread_join. So the hack I came up with was this: when I allocate the thread stack, I place a return address that points to a thread_return_handler function, which deallocates the thread, makes it a zombie, and saves its return value for later. This works for "just run a function and die" threads, but is very problematic with the main thread. Since it actually is the main function, if it returns before the other threads finish, the normal return mechanism kicks in and deallocates all the shared resources, thus screwing up all the running threads. I need to keep it from doing that. Any ideas on how that can be done?
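    For reference, the stack trick described above looks roughly like this (a sketch under big assumptions: x86, downward-growing stack, cdecl return value in EAX; thread_return_handler is the question's name, the rest is illustrative):

        /* Build a fresh thread stack so that when func returns,
         * control "returns" into thread_return_handler instead. */
        void *stack = malloc(STACK_SIZE);
        unsigned long *sp = (unsigned long *)((char *)stack + STACK_SIZE);
        *--sp = (unsigned long)arg;                   /* argument seen by func */
        *--sp = (unsigned long)thread_return_handler; /* fake return address   */
        /* The context-switch code then sets ESP to sp and jumps to func.
         * When func executes ret, it lands in thread_return_handler, which
         * can read func's return value out of EAX and zombify the thread. */

    For the main thread specifically, one direction to explore is never letting main()'s real return run once threading starts, e.g. having its exit path join or yield until the other threads are done.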

  • Cross-platform HTML application options

    - by Charles
    I'd like to develop a stand-alone desktop application targeting Windows (XP through 7) and Mac (Tiger through Snow Leopard), and if possible iPhone and Android. In order to make it all work with as much common code as possible (and because it's the only thing I'm good at), I'd like to handle the main logic with HTML and JS. Using Adobe AIR is a possibility. And I think I can do this with various application wrappers, using .NET for Windows XP, Objective-C for iPhone, Java for Android, and native "widget" platform support for Mac and Windows Vista & 7 (though I'd like to keep the widget in the foreground, so the Mac dashboard isn't ideal). Does anyone have any suggestions on where to start? The two sticking points are: I'll certainly need some form of persistent storage (cookies perhaps) to keep state between sessions. I'll also probably need access to remote data files, so if I use AJAX and the hosting HTML file resides on the device, it will need to be able to do cross-domain requests. I've done this on the iPhone without any problems, but I'd be surprised if this were possible on other platforms. For me, Android and iPhone will be the easiest to handle, and it looks like I can use Adobe AIR to handle the rest. But I wanted to know if there are any other alternatives. Does anyone have any suggestions?
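    On the persistent-storage point, a minimal sketch of session-spanning state in plain JS, assuming the wrapper embeds a browser engine recent enough to expose HTML5 localStorage (cookies would be the fallback where it isn't):

        // Save state on the way out, restore it on startup.
        function saveState(state) {
            localStorage.setItem("appState", JSON.stringify(state));
        }
        function loadState() {
            var raw = localStorage.getItem("appState");
            return raw ? JSON.parse(raw) : { lastScreen: "home" };
        }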

  • checking mongo database for data

    - by user1647484
    I'm playing around with this tutorial that uses Sinatra, backbone.js, and MongoDB for the database. It's my first time using Mongo. As far as I understand it, the app uses both local storage and a database. For the database it has these routes:

        get '/api/:thing' do
          DB.collection(params[:thing]).find.to_a.map{|t| from_bson_id(t)}.to_json
        end

        get '/api/:thing/:id' do
          from_bson_id(DB.collection(params[:thing]).find_one(to_bson_id(params[:id]))).to_json
        end

        post '/api/:thing' do
          oid = DB.collection(params[:thing]).insert(JSON.parse(request.body.read.to_s))
          "{\"_id\": \"#{oid.to_s}\"}"
        end

    After turning the server off and then on, I could see in the server log that data was being fetched through the database routes:

        127.0.0.1 - - [17/Sep/2012 08:21:58] "GET /api/todos HTTP/1.1" 200 430 0.0033

    My question is: how can I check from within the mongo shell whether the data is in the database? I started the mongo shell (./bin/mongo), selected the database with 'use mydb', and then, following the docs (http://www.mongodb.org/display/DOCS/Tutorial), tried commands such as

        > var cursor = db.things.find();
        > while (cursor.hasNext()) printjson(cursor.next());

    but they didn't return anything.
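    One thing the routes above suggest: the collection name comes from the URL (params[:thing]), and the logged request was GET /api/todos, so the data is probably in a collection named todos rather than things. Assuming the app's DB constant points at mydb, a shell session like this should show it:

        > use mydb
        > show collections            // lists the collections that actually exist
        > db.todos.find().pretty()    // dump the documents the app inserted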

  • Disabling Remote-Debugging Connection on Windows 2000

    - by Kim Leeper
    I have two machines, one running Win 2000 and one running Win XP, both with VC++ 6. I created an application on the Win XP machine (local) and successfully used the Win 2000 machine (remote) as the target for debugging. The code was on a shared drive on the Win 2000 machine. This setup worked well, just like in the movies! However, I now wish to use my Win 2000 machine as a development machine again, and I find I cannot. When attempting to execute a natively compiled application on this machine, I get a dialog with the title "Remote Execution Path And Filename" asking me for the same. When the dialog is cancelled (since the program attempting to execute is not remote), the program terminates without error. Extra info: on the Win XP machine, in VC++ under the Build menu - Debugger Remote Connection - Remote Connection Dialog, the "Connection:" list box has two entries, 'Local' and 'Network'; on the Win 2000 machine the 'Local' entry is not present, only the entry for 'Network'. How do I get back my 'Local' entry on what used to be my target machine?

  • What tools do you have at your disposal as a manager to promote a way of thinking

    - by John Leidegren
    This question goes beyond just programming, but I'd like to get some input on this, if that's okay with the community. Preferably from people that do a lot of coding themselves but also manage other people coding. My problem is this: we have all these ideas that we know are good for the overall strategy of the company, and the problem is not figuring out what to do, it's how to bring about this change. Just telling someone to do things differently isn't enough, and it's hard to promote a mindset that is shared within all of the company (this will take time). If I could jump forward, I'd like it if we could create a very nurturing company culture that promotes these ideals across all areas, but I'm not sure what tools to use. And by tools I mean anything I'm legally permitted to do: e.g. we could talk about it, we could arrange training sessions, we could spend more time in meetings (talk about it more), we could spend more time designing, we could spend more time pair-programming, we could add/remove incentives, or we could encourage more play. Ultimately, if we did all of these things, what would be the recurring theme that ties this together? I'd like to be able to answer the question -- why should we do things like this? -- and come up with an answer that explains how important it is to think about our ideals from beginning to end. I've purposely avoided talking about the specifics of the situation because I believe that it narrows things down too much. But I guess, by now you either know how to answer this question or you're as confused as I am ;) I'd love to hear from people who had to bring about a change in order to go from chaos to order, or to fix something in the organization which wasn't working. And I'd like to hear it from the perspective of the developer and the designer. -- or -- You could simply weigh in on what the most important qualities in an organization are for encouraging or stimulating a rigorous yet fun development cycle from start to finish.

  • Drupal 6 Validation for Form Callback Function

    - by Wade
    I have a simple form with a select menu on the node display page. Is there an easy way to validate the form in my callback function? By validation I don't mean anything advanced, just a check that the submitted values actually existed in the form array. For example, without AJAX, if my select menu has 3 items and I add a 4th item and try to submit the form, Drupal will give an error saying something similar to "an illegal choice was made, please contact the admin." With AJAX, this 4th item you created would get saved into the database. So do I have to write validation like

        if ($select_item > 0 && $select_item <= 3) {
          // insert into db
        }

    or is there an easier way to check that the item actually existed in the form array? I'm hoping there is, since without AJAX, Drupal will not submit the form if it was manipulated. Thanks. EDIT: So I basically need this in my callback function?

        $form_state = array('storage' => NULL, 'submitted' => FALSE);
        $form_build_id = $_POST['form_build_id'];
        $form = form_get_cache($form_build_id, $form_state);
        $args = $form['#parameters'];
        $form_id = array_shift($args);
        $form_state['post'] = $form['#post'] = $_POST;
        $form['#programmed'] = $form['#redirect'] = FALSE;
        drupal_process_form($form_id, $form, $form_state);

    To get $_POST['form_build_id'], I sent it as a data param -- is that right? Where I use form_get_cache, it looks like there is no data. Kind of lost now.
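    For the simple "was this value one of the real options" case, one lighter alternative to replaying the whole form is comparing the submitted value against the #options the cached form was built with. This is only a sketch -- the element name my_select and the response format are assumptions about the form in question:

        // After form_get_cache() succeeds, reject anything Drupal never rendered:
        $value = $_POST['my_select'];
        if (!isset($form['my_select']['#options'][$value])) {
          drupal_json(array('status' => FALSE, 'message' => t('Illegal choice.')));
          return;
        }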

  • Error monitoring/handling on webservers

    - by Industrial
    Hi everybody, We have a web server that we're about to launch a number of applications onto. They will all share database and memcached servers, but each application has its own MySQL database, and all memcached keys are prefixed per application. Possible scenario: if a memcached server in our cluster goes boom, we want someone (the system admin on duty) to be automatically contacted by email/iPhone push notification or in any other appropriate way. If we were to install 150 identical applications for our customers on our servers and a memcached server dies, all 150 applications will individually find this out and contact our system admin, who is most certainly going to think about getting a new job where he or she isn't about to be woken up by 150 messages sent at 4:15 in the morning. Possible solution: one idea is to set up an external server for error handling that gets a $_POST or cURL request sent to it, and handles storage of the error message depending on the seriousness of the actual error message. It would of course check, upon receiving the error call, whether the same memcached server has already been reported as offline, so there would be no need to spam the system admin with additional reminders... The questions: What's a good approach to handling errors like this? How do the big guys in the industry handle this? Thanks!
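    A minimal sketch of that dedup idea in PHP (the endpoint name, key names, and one-hour window are illustrative assumptions; it leans on the fact that Memcached::add is atomic, so only the first reporter of an incident wins):

        <?php
        // report.php -- collector endpoint: alert once per incident, swallow repeats.
        $incident = isset($_POST['incident']) ? $_POST['incident'] : 'unknown';

        $m = new Memcached();
        $m->addServer('127.0.0.1', 11211); // a memcached instance on the collector itself

        // add() fails if the key already exists, so only the FIRST of the
        // 150 identical reports triggers the page; the rest are ignored for 1h.
        if ($m->add('alerted:' . $incident, 1, 3600)) {
            mail('oncall@example.com', "Incident: $incident",
                 'First report; suppressing repeats for 1 hour.');
        }

    The important design point is that the dedup state lives on the collector box itself, not on the possibly-dead memcached cluster being monitored.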

  • PHP - file uploads and ways to prevent viruses from being uploaded in zip/rar archives

    - by Joe
    I am trying to provide a service on my website to allow users to upload files so others can download them. The issue is, since some of the uploads I will allow will be .zip/.rar files, I am curious what ideas exist to help prevent the uploading of archives with viruses/trojans etc. included. Some .zip files will include legitimate .exe files, though, so I am not sure what options I have. I thought about it, and I don't have a way of verifying files with a virus scanner on the server, since I am on shared hosting without the option to run a service like that... nor do I have the knowledge of how to do that. I am also aware there is no PHP class or database to scan the files for viruses. This means my only options are to rely on: a) manual approval <-- not an acceptable option for me, as it might become a busy site with thousands of uploads; b) getting the users to somehow point it out if it has viruses, through voting or "flagging", etc. Anyway, regarding "b" -- what ideas would you suggest?
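    A minimal sketch of the flagging idea in (b), assuming a PDO connection $pdo and a files table with a visible column -- the table layout and the threshold of 5 are invented for illustration:

        // One flag per user per file; the primary key makes repeat flags no-ops.
        //   CREATE TABLE flags (file_id INT, user_id INT, PRIMARY KEY (file_id, user_id));
        $pdo->prepare('INSERT IGNORE INTO flags (file_id, user_id) VALUES (?, ?)')
            ->execute(array($fileId, $userId));

        $count = $pdo->prepare('SELECT COUNT(*) FROM flags WHERE file_id = ?');
        $count->execute(array($fileId));

        if ($count->fetchColumn() >= 5) {
            // Enough independent reports: hide the file until a moderator reviews it.
            $pdo->prepare('UPDATE files SET visible = 0 WHERE id = ?')->execute(array($fileId));
        }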

  • J2EE and alternatives

    - by Ilya K
    Hello, I am a J2SE developer, but I have a rich web background (PHP, Perl/CGI and so on) and now I am starting a new project. It will have a web interface, spaghetti business logic, a relational database as storage, and connections to other services. I am building it from scratch. My colleagues told me to use Spring, Spring Security and Struts. I looked briefly at the J2EE spec and found that it covers almost all aspects of an enterprise application. I asked my colleagues why they need Spring and Struts, but it looks like they use those technologies simply because they are familiar with them and not familiar with the classic J2EE stack. So, my question is: what is bad about J2EE? Why do I need Spring if there are JNDI lookups? It would take a day or two to create a fake InitialContext for unit tests, and that is all: I can stand without external tools like Spring. Why do I need Spring Security if there is security built into the Servlet spec? I can map any request to any servlet using web.xml; no struts.xml is needed. I can use servlet filters instead of Struts interceptors. There is RMI, so I do not need Spring Remoting. And so on... Why should I bother myself with all that fancy stuff if there is J2EE? I really want to find a situation where J2EE is not enough. Do you have any? Thanks!
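    For the security point, the declarative mechanism the question refers to lives entirely in web.xml; a minimal sketch (realm, paths, and role names are placeholders):

        <!-- Container-managed security from the Servlet spec, no Spring Security: -->
        <security-constraint>
          <web-resource-collection>
            <web-resource-name>admin area</web-resource-name>
            <url-pattern>/admin/*</url-pattern>
          </web-resource-collection>
          <auth-constraint>
            <role-name>admin</role-name>
          </auth-constraint>
        </security-constraint>
        <login-config>
          <auth-method>FORM</auth-method>
          <form-login-config>
            <form-login-page>/login.jsp</form-login-page>
            <form-error-page>/login-error.jsp</form-error-page>
          </form-login-config>
        </login-config>
        <security-role>
          <role-name>admin</role-name>
        </security-role>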

  • Use of extern in C++ dll

    - by dom_beau
    I declare and then instantiate a static variable in a DLL:

        // DLL.h
        class A {
          //...
        };
        static A* a;

        // DLL.cpp
        A* a = new A;

    So far, so good... I was advised to use extern rather than static:

        extern A* a; // in DLL.h

    No problem with that, but the extern variable must be defined somewhere, and I got "Invalid storage class member". In other words, what I am used to doing is declaring a variable in a source file like this:

        // In src.cpp
        A a;

    then extern-declaring it in another source file in the same project:

        // In src2.cpp
        extern A a;

    so it is the same object a at link time. Maybe that is not the right thing to do? So, where do I define the variable that is now extern? Note that I used the static declaration in order to have the variable instantiated as soon as the DLL is loaded. Note also that the current use of static works most of the time, but I think I am observing a delay or something like it in the variable's instantiation, while it should always be instantiated at load time. I have been investigating this problem for a week now and I cannot find a solution.
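    For reference, the usual header/source split looks like this (a sketch; it assumes the error came from where the extern declaration was placed, which the question doesn't show):

        // DLL.h -- declaration only, at namespace scope (not inside a class):
        class A { /* ... */ };
        extern A* a;        // promises "this object is defined in one .cpp"

        // DLL.cpp -- exactly one definition, in exactly one translation unit:
        A* a = new A;       // runs during static initialization, when the DLL loads

    "Invalid storage class member" is the kind of error that appears if the extern declaration ends up inside a class body, which is one thing worth checking. Also note that with static in the header, every .cpp that includes DLL.h gets its own private copy of a, which would explain the "sometimes it isn't instantiated" symptom: different translation units see different pointers.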

  • trouble calculating offset index into 3D array

    - by Derek
    Hello, I am writing a CUDA kernel to create a 3x3 covariance matrix for each location in a rows*cols main matrix, so the 3D result matrix is rows*cols*9 in size, which I allocated in a single malloc accordingly. I need to access it through a single index value. The 9 values of the 3x3 covariance matrix get their values set according to the appropriate row r and column c from some other 2D arrays. In other words, I need to calculate the appropriate index to access the 9 elements of the 3x3 covariance matrix, as well as the row and column offset of the 2D matrices that are inputs to the value, as well as the appropriate index for the storage array. I have tried to simplify it down to the following:

        // I am calling this kernel with 1D blocks that are 512 cols x 1 row. TILE_WIDTH=512
        int bx = blockIdx.x;
        int by = blockIdx.y;
        int tx = threadIdx.x;
        int ty = threadIdx.y;
        int r = by + ty;
        int c = bx*TILE_WIDTH + tx;
        int offset = r*cols+c;
        int ndx = r*cols*rows + c*cols;
        if((r < rows) && (c < cols)){
            // this IF statement is trying to avoid the case where a thread block
            // went bigger than my original array.. not sure if correct
            d_cov[ndx + 0] = otherArray[offset];
            d_cov[ndx + 1] = otherArray[offset];
            d_cov[ndx + 2] = otherArray[offset];
            d_cov[ndx + 3] = otherArray[offset];
            d_cov[ndx + 4] = otherArray[offset];
            d_cov[ndx + 5] = otherArray[offset];
            d_cov[ndx + 6] = otherArray[offset];
            d_cov[ndx + 7] = otherArray[offset];
            d_cov[ndx + 8] = otherArray[offset];
        }

    When I check this array against the values calculated on the CPU, which loops over i=rows, j=cols, k=1..9, the results do not match up; in other words,

        d_cov[i*rows*cols + j*cols + k] != correctAnswer[i][j][k]

    Can anyone give me any tips on how to solve this problem? Is it an indexing problem, or some other logic error?
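    The two index formulas above disagree with each other, which by itself would produce mismatches. For a row-major rows x cols x 9 layout, one consistent convention (a sketch using the question's variable names) is:

        // Each (r, c) cell owns 9 consecutive doubles, so element k (k = 0..8)
        // of the covariance matrix at (r, c) lives at:
        int ndx = (r * cols + c) * 9 + k;

        // The CPU-side check must then use the same formula:
        // d_cov[(i * cols + j) * 9 + k] == correctAnswer[i][j][k]

    Note that the kernel's r*cols*rows + c*cols can exceed rows*cols*9 - 1 for large r (an out-of-bounds write), and the CPU check's i*rows*cols + j*cols + k is a third, different convention -- so all three need to agree.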

  • Suitable data structures for saving files in localStorage (HTML5) ?

    - by WmasterJ
    It is nice when there isn't a DB to maintain and users to authenticate. My professor has asked me to convert a recent research project of his from MySQL to using HTML5 localStorage completely; the project uses Bespin and calculates errors made by users in a code editor as part of his research. It doesn't seem so hard to do, even though digging through his code might take some time. Question: I need to store files and state (last placement of the cursor and the active file). I have already done so by implementing the recommendations in another stackoverflow thread, but I would like your input on how to structure the stored content. My current solution is a hashmap-like structure using JavaScript objects:

        files = {};
        // later, saving
        files[fileName] = data;

    and then storing in localStorage using some recommendations:

        localStorage.setObject("files", files);
        // Note that setObject(key, data) does not exist but is added
        // using Storage.prototype.setObject = function() {...

    Currently I'm also considering using some type of numeric id, so that file names can be changed without the hassle of renaming the key in the hashmap. What is your opinion on the way it is solved, and would you do it any differently?
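    A minimal sketch of the numeric-id variant (the setObject/getObject helpers are the ones the quoted thread adds; the rest of the structure is illustrative):

        Storage.prototype.setObject = function (key, value) {
            this.setItem(key, JSON.stringify(value));
        };
        Storage.prototype.getObject = function (key) {
            var raw = this.getItem(key);
            return raw ? JSON.parse(raw) : null;
        };

        var files = {
            nextId: 3,
            byId: {
                1: { name: "main.js", data: "var x = 1;" },
                2: { name: "util.js", data: "function f() {}" }
            }
        };
        localStorage.setObject("files", files);

        // Renaming only touches the record, never the key:
        files.byId[1].name = "app.js";
        localStorage.setObject("files", files);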

  • Rails nested attributes with a join model, where one of the models being joined is a new record

    - by gzuki
    I'm trying to build a grid, in Rails, for entering data. It has rows and columns, and rows and columns are joined by cells. In my view, I need the grid to be able to handle having 'new' rows and columns on the edge, so that if you type in them and then submit, they are automatically generated, and their shared cells are connected to them correctly. I want to be able to do this without JS. Rails nested attributes fail to handle a cell being mapped to both a new row and a new column; they can only do one or the other. The reason is that the cell attributes are nested specifically in one of the two models, and whichever one they aren't nested in will have no id (since it doesn't exist yet), so when pushed through accepts_nested_attributes_for on the top-level Grid model, the cells will only be bound to the new object created for whatever they were nested in. How can I handle this? Do I have to override Rails' handling of nested attributes? My models look like this, btw:

        class Grid < ActiveRecord::Base
          has_many :rows
          has_many :columns
          has_many :cells, :through => :rows

          accepts_nested_attributes_for :rows, :allow_destroy => true,
            :reject_if => lambda {|a| a[:description].blank? }
          accepts_nested_attributes_for :columns, :allow_destroy => true,
            :reject_if => lambda {|a| a[:description].blank? }
        end

        class Column < ActiveRecord::Base
          belongs_to :grid
          has_many :cells, :dependent => :destroy
          has_many :rows, :through => :grid
        end

        class Row < ActiveRecord::Base
          belongs_to :grid
          has_many :cells, :dependent => :destroy
          has_many :columns, :through => :grid
          accepts_nested_attributes_for :cells
        end

        class Cell < ActiveRecord::Base
          belongs_to :row
          belongs_to :column
          has_one :grid, :through => :row
        end

  • Cross-platform build UNC share (Windows->Linux) - possible to be case-sensitive on CIFS share?

    - by holtavolt
    To optimize builds between Windows and Linux (Ubuntu 10.04), I've got a UNC share of the source tree that is shared between systems, and all build output goes to local disk on each system. This mostly works great, as source updates and changes can quickly be tested on both systems, but there's one annoying limitation I can't find a way around: the Linux CIFS mount is case-insensitive. Consequently, a test compile of code that has an error like

        #include "Foo.h"

    for a file foo.h will not be caught by a test build (until a local compile is done on the Linux box, e.g. in nightly builds). Is it possible to get case sensitivity on the Windows UNC share from the Linux box? I've tried a variety of fstab and mount combinations with no success, as well as editing smb.conf to set "case sensitive = yes". Given what the Ubuntu man page states on this -- nocase: "Request case insensitive path name matching (case sensitive is the default if the server supports it)." -- I suspect that this is a limitation on the Windows side, and there's nothing to be done short of switching to some other mechanism (is NFS still viable anywhere?). If anyone has already solved this to support optimized cross-platform build environments, I'd appreciate hearing about it!

  • Can I split a single SQL 2008 DB Table into multiple filegroups, based on a discriminator column?

    - by Pure.Krome
    Hi folks, I've got a SQL Server 2008 R2 database which has a number of tables. Two of these tables contain a lot of large data... mainly because one of them is VARBINARY(MAX) and the sister table is GEOGRAPHY. (Why two tables? Read below if you're interested.***) The data in these tables is geospatial shapes, such as zipcode boundaries. Now, the first 70K-odd rows are for DataType = 1; the remaining 5 million rows are for DataType = 2. Now, is it possible to split the table data into two files, so that all rows where DataType != 2 go into File_A and DataType = 2 goes into File_B? That way, when I back up the DB, I can skip adding File_B so my download is waaaaay smaller. Is this possible? I'm guessing you might be thinking: why not keep them as TWO extra tables? Mainly because in the code the data is conceptually the same... it just happens that I want to split the storage of this model data. It really messes up my model if I now have two aggregates in my model instead of one. ***Entity Framework doesn't like tables with GEOGRAPHY, so I have to create a new table which transforms the GEOGRAPHY to VARBINARY, and then drop that into EF.
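    What this describes sounds like table partitioning on the DataType column (an Enterprise-edition feature in SQL Server 2008) combined with file/filegroup backup. A sketch with invented object names:

        -- Two filegroups, one file each:
        ALTER DATABASE MyDb ADD FILEGROUP FG_Small;
        ALTER DATABASE MyDb ADD FILEGROUP FG_Large;
        ALTER DATABASE MyDb ADD FILE (NAME = File_A, FILENAME = 'D:\Data\File_A.ndf') TO FILEGROUP FG_Small;
        ALTER DATABASE MyDb ADD FILE (NAME = File_B, FILENAME = 'D:\Data\File_B.ndf') TO FILEGROUP FG_Large;

        -- Partition on the discriminator: DataType <= 1 -> FG_Small, else FG_Large.
        CREATE PARTITION FUNCTION pfDataType (int) AS RANGE LEFT FOR VALUES (1);
        CREATE PARTITION SCHEME psDataType AS PARTITION pfDataType TO (FG_Small, FG_Large);

        -- Rebuild the clustered index on the scheme (DataType must join the key):
        CREATE CLUSTERED INDEX CIX_Shapes ON Shapes (DataType, ShapeId)
            WITH (DROP_EXISTING = ON) ON psDataType (DataType);

        -- Back up just the small filegroup (plus PRIMARY) for the quick download:
        BACKUP DATABASE MyDb FILEGROUP = 'PRIMARY', FILEGROUP = 'FG_Small'
            TO DISK = 'D:\Backup\MyDb_small.bak';

    One caveat worth testing before relying on this: piecemeal restore has its own rules, so a backup that skips FG_Large can't simply be restored on its own into a fully working database.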

  • File upload fails when user is authenticated. Using IIS7 Integrated mode.

    - by Nikkelmann
    These are the user identities my website tells me that it uses: Logged on: NT AUTHORITY\NETWORK SERVICE (cannot write any files at all) and Not logged on: WSW32\IUSR_77 (can write files to any folder). I have an ASP.NET 4.0 website on a shared hosting IIS7 web server running in Integrated mode with 32-bit application support enabled, and MSSQL 2008. Using Classic mode is not an option, since I need to secure some static files and I use Routing. In my web.config file I have set the following:

        <system.webServer>
          <modules runAllManagedModulesForAllRequests="true" />
        </system.webServer>

    My hosting company says that impersonation is enabled by default at the machine level, so this is not something I can change. I asked their support and they referred me to this article: http://www.codinghub.net/2010/08/differences-between-integrated-mode-and.html Citing this part: "Different windows identity in Forms authentication: When Forms Authentication is used by an application and anonymous access is allowed, the Integrated mode identity differs from the Classic mode identity in the following ways:

        * ServerVariables["LOGON_USER"] is filled.
        * Request.LogonUserIdentity uses the credentials of the [NT AUTHORITY\NETWORK SERVICE] account instead of the [NT AUTHORITY\INTERNET USER] account.

    This behavior occurs because authentication is performed in a single stage in Integrated mode. Conversely, in Classic mode, authentication occurs first with IIS 7.0 using anonymous access, and then with ASP.NET using Forms authentication. Thus, the result of the authentication is always a single user -- the Forms authentication user. AUTH_USER/LOGON_USER returns this same user because the Forms authentication user credentials are synchronized between IIS 7.0 and ASP.NET. A side effect is that LOGON_USER, HttpRequest.LogonUserIdentity, and impersonation no longer can access the Anonymous user credentials that IIS 7.0 would have authenticated by using Classic mode." How do I set up my website so that it can use the proper identity with the proper permissions? I've looked high and low for any answers regarding this specific problem, but found nil so far... I hope you can help!

  • MySQL Config File for Large System

    - by Jonathon
    We are running MySQL on a Windows 2003 Server Enterprise Edition box. MySQL is about the only program running on the box. We have approx. 8 slaves replicating from it, but my understanding is that having multiple slaves connecting to the same master does not significantly slow down performance, if at all. The master server has 16G RAM, 10 terabytes of drives in RAID 10, and four dual-core processors. From what I have seen on other sites, we have a really robust machine as our master DB server. We just upgraded from a machine with only 4G RAM but with similar hard drives, RAID, etc. It also ran Apache, so it was our DB server and our application server. It was getting a little slow, so we split the DB server onto this new machine and kept the application server on the first machine. We also distributed the application load amongst a few of our other slave servers, which also run the application. The problem is that the new DB server has mysqld.exe consuming 95-100% of CPU almost all the time, and this is really causing the app to run slowly. I know we have several queries and table structures that could be better optimized, but since they worked okay on the older, smaller server, I assume that our my.ini (MySQL config) file is not properly configured. Most of what I see on the net is about setting config files for small machines, so can anyone help me get the my.ini file correct for a large dedicated machine like ours? I just don't see how mysqld could get so bogged down! FYI: We have about 100 queries per second. We only use MyISAM tables, so skip-innodb is set in the ini file. And yes, I know it is reading the ini file correctly, because I can change some settings (like the server-id) and it will kill the server at startup. Here is the my.ini file:

        #MySQL Server Instance Configuration File
        # ----------------------------------------------------------------------
        # Generated by the MySQL Server Instance Configuration Wizard
        #
        #
        # Installation Instructions
        # ----------------------------------------------------------------------
        #
        # On Linux you can copy this file to /etc/my.cnf to set global options,
        # mysql-data-dir/my.cnf to set server-specific options
        # (@localstatedir@ for this installation) or to
        # ~/.my.cnf to set user-specific options.
        #
        # On Windows you should keep this file in the installation directory
        # of your server (e.g. C:\Program Files\MySQL\MySQL Server X.Y). To
        # make sure the server reads the config file use the startup option
        # "--defaults-file".
        #
        # To run the server from the command line, execute this in a
        # command line shell, e.g.
        # mysqld --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini"
        #
        # To install the server as a Windows service manually, execute this in a
        # command line shell, e.g.
        # mysqld --install MySQLXY --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini"
        #
        # And then execute this in a command line shell to start the server, e.g.
        # net start MySQLXY
        #
        #
        # Guidelines for editing this file
        # ----------------------------------------------------------------------
        #
        # In this file, you can use all long options that the program supports.
        # If you want to know the options a program supports, start the program
        # with the "--help" option.
        #
        # More detailed information about the individual options can also be
        # found in the manual.
        #
        #
        # CLIENT SECTION
        # ----------------------------------------------------------------------
        #
        # The following options will be read by MySQL client applications.
        # Note that only client applications shipped by MySQL are guaranteed
        # to read this section. If you want your own MySQL client program to
        # honor these values, you need to specify it as an option during the
        # MySQL client library initialization.
        #
        [client]
        port=3306

        [mysql]
        default-character-set=latin1

        # SERVER SECTION
        # ----------------------------------------------------------------------
        #
        # The following options will be read by the MySQL Server. Make sure that
        # you have installed the server correctly (see above) so it reads this
        # file.
        #
        [mysqld]

        # The TCP/IP Port the MySQL Server will listen on
        port=3306

        # Path to installation directory. All paths are usually resolved relative to this.
        basedir="D:/MySQL/"

        # Path to the database root
        datadir="D:/MySQL/data"

        # The default character set that will be used when a new schema or table is
        # created and no character set is defined
        default-character-set=latin1

        # The default storage engine that will be used when creating new tables
        default-storage-engine=MYISAM

        # Set the SQL mode to strict
        #sql-mode="STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"
        # we changed this because there are a couple of queries that can get blocked otherwise
        sql-mode=""

        # performance configs
        skip-locking
        max_allowed_packet = 1M
        table_open_cache = 512

        # The maximum amount of concurrent sessions the MySQL server will
        # allow. One of these connections will be reserved for a user with
        # SUPER privileges to allow the administrator to login even if the
        # connection limit has been reached.
        max_connections=1510

        # Query cache is used to cache SELECT results and later return them
        # without actually executing the same query once again. Having the query
        # cache enabled may result in significant speed improvements if you
        # have a lot of identical queries and rarely changing tables. See the
        # "Qcache_lowmem_prunes" status variable to check if the current value
        # is high enough for your load.
        # Note: In case your tables change very often, or if your queries are
        # textually different every time, the query cache may result in a
        # slowdown instead of a performance improvement.
        query_cache_size=168M

        # The number of open tables for all threads. Increasing this value
        # increases the number of file descriptors that mysqld requires.
        # Therefore you have to make sure to set the amount of open files
        # allowed to at least 4096 in the variable "open-files-limit" in
        # section [mysqld_safe]
        table_cache=3020

        # Maximum size for internal (in-memory) temporary tables. If a table
        # grows larger than this value, it is automatically converted to a disk
        # based table. This limitation is for a single table. There can be many
        # of them.
        tmp_table_size=30M

        # How many threads we should keep in a cache for reuse. When a client
        # disconnects, the client's threads are put in the cache if there aren't
        # more than thread_cache_size threads from before. This greatly reduces
        # the amount of thread creations needed if you have a lot of new
        # connections. (Normally this doesn't give a notable performance
        # improvement if you have a good thread implementation.)
        thread_cache_size=64

        #*** MyISAM Specific options
        # The maximum size of the temporary file MySQL is allowed to use while
        # recreating the index (during REPAIR, ALTER TABLE or LOAD DATA INFILE).
        # If the file-size would be bigger than this, the index will be created
        # through the key cache (which is slower).
        myisam_max_sort_file_size=100G

        # If the temporary file used for fast index creation would be bigger
        # than using the key cache by the amount specified here, then prefer the
        # key cache method. This is mainly used to force long character keys in
        # large tables to use the slower key cache method to create the index.
        myisam_sort_buffer_size=64M

        # Size of the Key Buffer, used to cache index blocks for MyISAM tables.
        # Do not set it larger than 30% of your available memory, as some memory
        # is also required by the OS to cache rows. Even if you're not using
        # MyISAM tables, you should still set it to 8-64M as it will also be
        # used for internal temporary disk tables.
        key_buffer_size=3072M

        # Size of the buffer used for doing full table scans of MyISAM tables.
        # Allocated per thread, if a full scan is needed.
        read_buffer_size=2M
        read_rnd_buffer_size=8M

        # This buffer is allocated when MySQL needs to rebuild the index in
        # REPAIR, OPTIMIZE, ALTER table statements as well as in LOAD DATA INFILE
        # into an empty table. It is allocated per thread so be careful with
        # large settings.
        sort_buffer_size=2M

        #*** INNODB Specific options ***
        innodb_data_home_dir="D:/MySQL InnoDB Datafiles/"

        # Use this option if you have a MySQL server with InnoDB support enabled
        # but you do not plan to use it. This will save memory and disk space
        # and speed up some things.
        skip-innodb

        # Additional memory pool that is used by InnoDB to store metadata
        # information. If InnoDB requires more memory for this purpose it will
        # start to allocate it from the OS. As this is fast enough on most
        # recent operating systems, you normally do not need to change this
        # value. SHOW INNODB STATUS will display the current amount used.
        innodb_additional_mem_pool_size=11M

        # If set to 1, InnoDB will flush (fsync) the transaction logs to the
        # disk at each commit, which offers full ACID behavior. If you are
        # willing to compromise this safety, and you are running small
        # transactions, you may set this to 0 or 2 to reduce disk I/O to the
        # logs. Value 0 means that the log is only written to the log file and
        # the log file flushed to disk approximately once per second. Value 2
        # means the log is written to the log file at each commit, but the log
        # file is only flushed to disk approximately once per second.
        innodb_flush_log_at_trx_commit=1

        # The size of the buffer InnoDB uses for buffering log data. As soon as
        # it is full, InnoDB will have to flush it to disk. As it is flushed
        # once per second anyway, it does not make sense to have it very large
        # (even with long transactions).
        innodb_log_buffer_size=6M

        # InnoDB, unlike MyISAM, uses a buffer pool to cache both indexes and
        # row data. The bigger you set this the less disk I/O is needed to
        # access data in tables. On a dedicated database server you may set this
        # parameter up to 80% of the machine physical memory size. Do not set it
        # too large, though, because competition of the physical memory may
        # cause paging in the operating system. Note that on 32bit systems you
        # might be limited to 2-3.5G of user level memory per process, so do not
        # set it too high.
        innodb_buffer_pool_size=500M

        # Size of each log file in a log group. You should set the combined size
        # of log files to about 25%-100% of your buffer pool size to avoid
        # unneeded buffer pool flush activity on log file overwrite. However,
        # note that a larger logfile size will increase the time needed for the
        # recovery process.
        innodb_log_file_size=100M

        # Number of threads allowed inside the InnoDB kernel. The optimal value
        # depends highly on the application, hardware as well as the OS
        # scheduler properties. A too high value may lead to thread thrashing.
        innodb_thread_concurrency=10

        # replication settings (this is the master)
        log-bin=log
        server-id = 1

    Thanks for all the help. It is greatly appreciated.
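    Before retuning, a few standard server-side checks can show where mysqld is actually spending its time (these are stock MySQL commands, nothing specific to this setup):

        SHOW GLOBAL STATUS LIKE 'Key_read%';    -- MyISAM key cache hit rate vs. key_buffer_size
        SHOW GLOBAL STATUS LIKE 'Qcache%';      -- query cache hits vs. lowmem prunes
        SHOW GLOBAL STATUS LIKE 'Table_locks%'; -- MyISAM table-lock contention
        SHOW FULL PROCESSLIST;                  -- what the busy threads are doing right now

    With CPU pegged (rather than I/O wait) and MyISAM under concurrent read/write load, table-lock contention and unindexed queries are the usual suspects to rule out before blaming any my.ini value.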

  • How do I authenticate an ADO.NET Data Service?

    - by lsb
    Hi! I've created an ADO.NET Data Service hosted in an Azure worker role. I want to pass credentials from a simple console client to the service, then validate them using a QueryInterceptor. Unfortunately, the credentials don't seem to be making it over the wire. The following is a simplified version of the code I'm using, starting with the DataService on the server:

        using System;
        using System.Data.Services;
        using System.Linq.Expressions;
        using System.ServiceModel;
        using System.Web;

        namespace Oslo.Worker
        {
            [ServiceBehavior(AddressFilterMode = AddressFilterMode.Any)]
            public class AdminService : DataService<OsloEntities>
            {
                public static void InitializeService(
                    IDataServiceConfiguration config)
                {
                    config.SetEntitySetAccessRule("*", EntitySetRights.All);
                    config.SetServiceOperationAccessRule("*", ServiceOperationRights.All);
                }

                [QueryInterceptor("Pairs")]
                public Expression<Func<Pair, bool>> OnQueryPairs()
                {
                    // This doesn't work!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
                    if (HttpContext.Current.User.Identity.Name != "ADMIN")
                        throw new Exception("Ooops!");

                    return p => true;
                }
            }
        }

    Here's the AdminHost I'm using to instantiate the AdminService in my Azure worker role:

        using System;
        using System.Data.Services;

        namespace Oslo.Worker
        {
            public class AdminHost : DataServiceHost
            {
                public AdminHost(Uri baseAddress)
                    : base(typeof(AdminService), new Uri[] { baseAddress })
                {
                }
            }
        }

    And finally, here's the client code:

        using System;
        using System.Data.Services.Client;
        using System.Net;
        using Oslo.Shared;

        namespace Oslo.ClientTest
        {
            public class AdminContext : DataServiceContext
            {
                public AdminContext(Uri serviceRoot, string userName,
                    string password) : base(serviceRoot)
                {
                    Credentials = new NetworkCredential(userName, password);
                }

                public DataServiceQuery<Order> Orders
                {
                    get { return base.CreateQuery<Order>("Orders"); }
                }
            }
        }

    I should mention that the code works great, with the single exception that the credentials are not being passed over the wire. Any help in this regard would be greatly appreciated! Thanks....
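    One thing worth trying (an assumption about the cause, not a verified fix): NetworkCredential is typically only sent after the server issues an authentication challenge, so if the service accepts anonymous requests, no credentials ever leave the client. Attaching a header yourself on every request sidesteps the challenge; a sketch with hand-rolled Basic credentials (requires using System.Text; the server side would then decode the Authorization header in the interceptor or a custom module):

        var ctx = new AdminContext(serviceRoot, "user", "secret");
        ctx.SendingRequest += (sender, e) =>
        {
            string token = Convert.ToBase64String(
                Encoding.UTF8.GetBytes("user" + ":" + "secret"));
            e.RequestHeaders["Authorization"] = "Basic " + token;
        };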

  • Netbeans, JPA Entity Beans in seperate projects. Unknown entity bean class

    - by Stu
    I am working in NetBeans and have my entity beans and web services in separate projects. I include the entity beans in the web services project; however, the ApplicationConfig.java file keeps getting overwritten, removing the entries I make for the entity beans in the associated jar file. My question is: is it required to have both the entity beans and the web services in the same project/jar file? If not, what is the appropriate way to include the entity beans which are in the jar file?

        <?xml version="1.0" encoding="UTF-8"?>
        <persistence version="2.1" xmlns="http://xmlns.jcp.org/xml/ns/persistence"
                     xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                     xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/persistence
                                         http://xmlns.jcp.org/xml/ns/persistence/persistence_2_1.xsd">
          <persistence-unit name="jdbc/emrPool" transaction-type="JTA">
            <provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
            <jta-data-source>jdbc/emrPool</jta-data-source>
            <exclude-unlisted-classes>false</exclude-unlisted-classes>
            <properties>
              <property name="eclipselink.ddl-generation" value="none"/>
              <property name="eclipselink.cache.shared.default" value="false"/>
            </properties>
          </persistence-unit>
        </persistence>

    Based on Melc's input, I verified that the transaction type is set to JTA and the jta-data-source is set to the value for the GlassFish JDBC resource. Unfortunately, the problem still persists. I have opened the WAR file and validated that the EntityBean.jar file is the latest version and is located in the WEB-INF/lib directory of the WAR file. I think the problem is tied to the fact that the entities are not being "registered" with the entity manager. However, I do not know why they are not being registered.
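    With exclude-unlisted-classes set to false, a persistence unit normally only scans its own archive for entities; entities living in a separate jar usually have to be pointed at explicitly. A sketch of the two standard JPA options (jar and class names here are placeholders):

        <persistence-unit name="jdbc/emrPool" transaction-type="JTA">
          <jta-data-source>jdbc/emrPool</jta-data-source>
          <!-- Option 1: tell the unit to scan the separate jar for entities -->
          <jar-file>lib/EntityBeans.jar</jar-file>
          <!-- Option 2: list each entity class individually -->
          <class>com.example.entities.Patient</class>
          <class>com.example.entities.Visit</class>
        </persistence-unit>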

  • Sharing NSMutableArray

    - by Lucas Veiga
    Hey everybody, I'm trying to share an NSMutableArray with another xib. cartList.codigo is an NSMutableArray from a shared class, according to James Brannan's tutorial (http://www.vimeo.com/12164589). When I count it, it gives me 1. But when I load the other view, it gives me 0. What's wrong? The view that adds:

        self.produtoCodigo = [[NSMutableArray alloc] init];
        [self.produtoCodigo addObject:@"aa"];

        CartViewController *carrinho = [[CartViewController alloc]
            initWithNibName:@"CartViewController" bundle:[NSBundle mainBundle]];
        CartList *lista = [[CartList alloc] init];
        carrinho.cartList = lista;
        carrinho.cartList.codigo = self.produtoCodigo;
        NSLog(@"QTD %i", [carrinho.cartList.codigo count]);

    The view that wants to load the item added:

        - (void)viewDidAppear:(BOOL)animated {
            self.produtoCodigo = self.cartList.codigo;
            NSLog(@"%i", [self.produtoCodigo count]);
            [super viewDidLoad];
        }

    I'm loading the CartList class in both. Thanks!
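    One pattern that would explain the 0 (an assumption about the code not shown): if the second view controller allocates its own CartList instead of receiving the one filled above, it counts a different, empty array. The shared-class idea from the tutorial is usually written as a singleton; a sketch, pre-ARC to match the question's era (sharedList is an invented name):

        // CartList.m
        @implementation CartList

        + (CartList *)sharedList {
            static CartList *shared = nil;
            if (shared == nil) {
                shared = [[CartList alloc] init];
                shared.codigo = [[NSMutableArray alloc] init];
            }
            return shared;
        }

        @end

        // Both views then talk to the same object:
        [[CartList sharedList].codigo addObject:@"aa"];
        NSLog(@"QTD %i", [[CartList sharedList].codigo count]);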

  • ASP.NET or PHP: Is Memcached useful for storing user-state information?

    - by hamlin11
    This question may expose my ignorance as a web developer, but that wouldn't exactly be a bad thing for me now, would it? I have the need to store user-state information. Examples of information that I need to store per user (define user: unauthenticated visitor): whether the user arrived at the site from Google/Bing/Yahoo; whether the user utilized the search feature (true/false); a list of previously visited product pages on the current visit. It is my understanding that I could store this in the view state, but that causes a problem with page load from the end-user's perspective, because a significant amount of non-viewable information is being transferred to and from the end-users even though the server is the only side that needs the info. On a similar note, it is my understanding that the session state can be used to store such information, but doesn't this also result in the same information being transferred to the user and stored in their cookie? (Not quite as bad as viewstate, but it does not feel ideal.) This leaves me with either a server-only session storage system or a memcached solution. Is memcached the only good option here?
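    For reference (a general point about the platform, not this app specifically): out-of-the-box ASP.NET session state keeps the data on the server and round-trips only a small session-id cookie, unlike ViewState. A config sketch of the server-side modes:

        <!-- web.config: only the session id travels in the cookie in all of these -->
        <system.web>
          <!-- default: in the worker process's memory -->
          <sessionState mode="InProc" timeout="20" />
          <!-- or out-of-process stores, which survive worker recycles:
          <sessionState mode="StateServer" stateConnectionString="tcpip=127.0.0.1:42424" />
          <sessionState mode="SQLServer" sqlConnectionString="Data Source=.;Integrated Security=SSPI" />
          -->
        </system.web>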

  • Stable/repeatable random sort (MySQL, Rails)

    - by Matt Rogish
    I'd like to paginate through a randomly sorted list of ActiveRecord models (rows from a MySQL database). However, this randomization needs to persist on a per-session basis, so that other people who visit the website also receive a random, paginate-able list of records. Let's say there are enough entities (tens of thousands) that storing the randomly sorted ID values in either the session or a cookie is too large, so I must temporarily persist it in some other way (MySQL, file, etc.). Initially I thought I could create a function based on the session ID and the page ID (returning the object IDs for that page); however, since the object ID values in MySQL are not sequential (there are gaps), that seemed to fall apart as I was poking at it. The nice thing is that it would require no/minimal storage, but the downsides are that it is likely pretty complex to implement and probably CPU-intensive. My feeling is I should create an intersection table, something like:

        random_sorts( sort_id, created_at, user_id NULL if guest )
        random_sort_items( sort_id, item_id, position )

    and then simply store the 'sort_id' in the session. Then I can paginate the random_sorts WHERE sort_id = n ORDER BY position LIMIT ... as usual. Of course, I'd have to put some sort of a reaper in there to remove them after some period of inactivity (based on random_sorts.created_at). Unfortunately, I'd have to invalidate the sort as new objects were created (and/or old objects were removed, although deletion is very rare). And, as load increases, the size/performance of this table (even properly indexed) drops. It seems like this ought to be a solved problem, but I can't find any Rails plugins that do this... Any ideas? Thanks!!
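    One storage-free alternative worth benchmarking (a documented MySQL property, though sorting tens of thousands of rows per request is the trade-off): RAND() accepts a seed, and a seeded RAND() ordering is repeatable, so a single small integer in the session reproduces the whole shuffle:

        -- same seed -> same order on every request, no intersection table:
        SELECT id FROM items ORDER BY RAND(12345) LIMIT 30 OFFSET 60;

    In Rails 2.x terms that's roughly Item.all(:order => "RAND(#{session[:seed].to_i})", :limit => 30, :offset => 60), with the to_i guarding against injection. New and deleted rows still shift the ordering, i.e. the same invalidation caveat as the intersection table.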

  • Efficient mapping of game entity positions in Java

    - by byte
    In Java (Swing), say I've got a 2D game where I have various types of entities on the screen, such as a player, bad guys, powerups, etc. When the player moves across the screen, in order to do efficient checking of what is in the immediate vicinity of the player, I would think I'd want indexed access to the things that are near the character, based on their position. For example, if player 'P' steps onto element 'E' in the following example...

        | | | | | |
        | | | |P| |
        | | |E| | |
        | | | | | |

    ...the check would be to do something like:

        if (player.getPosition().x == entity.getPosition().x
                && player.getPosition().y == entity.getPosition().y) {
            // do something
        }

    And that's fine, but that implies that the entities hold their positions, and therefore if I had MANY entities on the screen I would have to loop through all available entities and check each one's position against the player's position. This seems really inefficient, especially if you start getting tons of entities. So, I suspect I'd want some sort of map, like

        Map<Point, Entity> map = new HashMap<Point, Entity>();

    and store my point information there, so that I could access these entities in constant time. The only problem with that approach is that if I want to move an entity to a different point on the screen, I'd have to search through the values of the HashMap for the entity I want to move (inefficient, since I don't know its Point position ahead of time), and then, once I've found it, remove it from the HashMap and re-insert it with the new position information. Any suggestions or advice on what sort of data structure/storage format I ought to be using here in order to have efficient access to entities based on their position, as well as positions based on the entity?
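    A common answer is to keep two maps and update them together, so both lookups are O(1). A minimal sketch (EntityIndex and its method names are invented; Entity stands in for your existing entity type):

        import java.awt.Point;
        import java.util.HashMap;
        import java.util.Map;

        class EntityIndex {
            private final Map<Point, Entity> byPosition = new HashMap<Point, Entity>();
            private final Map<Entity, Point> byEntity = new HashMap<Entity, Point>();

            void place(Entity e, Point p) {
                Point old = byEntity.put(e, new Point(p)); // defensive copy: Point is mutable
                if (old != null) {
                    byPosition.remove(old);                // vacate the previous cell
                }
                byPosition.put(new Point(p), e);
            }

            Entity at(Point p)         { return byPosition.get(p); }
            Point positionOf(Entity e) { return byEntity.get(e); }
        }

    java.awt.Point already implements equals/hashCode by coordinates, which is what makes it usable as a key; because it is mutable, the defensive copies above matter.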

  • Interesting Scala typing solution, doesn't work in 2.7.7?

    - by djc
    I'm trying to build some image algebra code that can work with images (basically a linear pixel buffer + dimensions) that have different types for the pixel. To get this to work, I've defined a parametrized Pixel trait with a few methods that should be able to get used with any Pixel subclass. (For now, I'm only interested in operations that work on the same Pixel type.) Here it is:

        trait Pixel[T <: Pixel[T]] {
          def mul(v: Double): T
          def max(v: T): T
          def div(v: Double): T
          def div(v: T): T
        }

    Now I define a single Pixel type that has storage based on three doubles (basically RGB 0.0-1.0), which I've called TripleDoublePixel:

        class TripleDoublePixel(v: Array[Double]) extends Pixel[TripleDoublePixel] {
          var data: Array[Double] = v

          def this() = this(Array(0.0, 0.0, 0.0))

          def toString(): String = {
            "(" + data(0) + ", " + data(1) + ", " + data(2) + ")"
          }

          def increment(v: TripleDoublePixel) {
            data(0) += v.data(0)
            data(1) += v.data(1)
            data(2) += v.data(2)
          }

          def mul(v: Double): TripleDoublePixel = {
            new TripleDoublePixel(data.map(x => x * v))
          }

          def div(v: Double): TripleDoublePixel = {
            new TripleDoublePixel(data.map(x => x / v))
          }

          def div(v: TripleDoublePixel): TripleDoublePixel = {
            var tmp = new Array[Double](3)
            tmp(0) = data(0) / v.data(0)
            tmp(1) = data(1) / v.data(1)
            tmp(2) = data(2) / v.data(2)
            new TripleDoublePixel(tmp)
          }

          def max(v: TripleDoublePixel): TripleDoublePixel = {
            val lv = data(0) * data(0) + data(1) * data(1) + data(2) * data(2)
            val vv = v.data(0) * v.data(0) + v.data(1) * v.data(1) + v.data(2) * v.data(2)
            if (lv > vv) (this) else v
          }
        }

    Now I want to write code to use this that doesn't have to know what type the pixels are. For example:

        def idiv[T](a: Image[T], b: Image[T]) {
          for (i <- 0 until a.data.size) {
            a.data(i) = a.data(i).div(b.data(i))
          }
        }

    Unfortunately, this doesn't compile:

        (fragment of lindet-gen.scala):145: error: value div is not a member of T
            a.data(i) = a.data(i).div(b.data(i))

    I was told in #scala that this worked for someone else, but that was on 2.8. I've tried to get 2.8-rc1 going, but it doesn't compile for me. Is there any way to get this to work in 2.7.7?
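    For what it's worth, the usual fix for exactly this error (offered as a sketch; it assumes Image[T] exposes data as written above) is to repeat the trait's F-bound on idiv, so the compiler knows T has a div method:

        def idiv[T <: Pixel[T]](a: Image[T], b: Image[T]) {
          for (i <- 0 until a.data.size) {
            a.data(i) = a.data(i).div(b.data(i))
          }
        }

    With a bare [T], T is unconstrained, so "value div is not a member of T" is precisely what 2.7.7 should say.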
