Search Results

Search found 6001 results on 241 pages for 'requires'.


  • MySQL Config File for Large System

    - by Jonathon
    We are running MySQL on a Windows 2003 Server Enterprise Edition box. MySQL is about the only program running on the box. We have approx. 8 slaves replicating from it, but my understanding is that having multiple slaves connecting to the same master does not significantly slow down performance, if at all. The master server has 16G RAM, 10 Terabyte drives in RAID 10, and four dual-core processors. From what I have seen on other sites, we have a really robust machine as our master db server. We just upgraded from a machine with only 4G RAM, but with similar hard drives, RAID, etc. It also ran Apache, so it was our db server and our application server. It was getting a little slow, so we split the db server onto this new machine and kept the application server on the first machine. We also distributed the application load amongst a few of our other slave servers, which also run the application. The problem is that on the new db server mysqld.exe is consuming 95-100% of CPU almost all the time and is really causing the app to run slowly. I know we have several queries and table structures that could be better optimized, but since they worked okay on the older, smaller server, I assume that our my.ini (MySQL config) file is not properly configured. Most of what I see on the net is about setting config files for small machines, so can anyone help me get the my.ini file correct for a large dedicated machine like ours? I just don't see how mysqld could get so bogged down! FYI: We have about 100 queries per second. We only use MyISAM tables, so skip-innodb is set in the ini file. And yes, I know it is reading the ini file correctly, because if I change some settings (like the server-id) it will kill the server at startup. Here is the my.ini file:

        #MySQL Server Instance Configuration File
        # ----------------------------------------------------------------------
        # Generated by the MySQL Server Instance Configuration Wizard
        #
        #
        # Installation Instructions
        # ----------------------------------------------------------------------
        #
        # On Linux you can copy this file to /etc/my.cnf to set global options,
        # mysql-data-dir/my.cnf to set server-specific options
        # (@localstatedir@ for this installation) or to
        # ~/.my.cnf to set user-specific options.
        #
        # On Windows you should keep this file in the installation directory
        # of your server (e.g. C:\Program Files\MySQL\MySQL Server X.Y). To
        # make sure the server reads the config file use the startup option
        # "--defaults-file".
        #
        # To run the server from the command line, execute this in a
        # command line shell, e.g.
        # mysqld --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini"
        #
        # To install the server as a Windows service manually, execute this in a
        # command line shell, e.g.
        # mysqld --install MySQLXY --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini"
        #
        # And then execute this in a command line shell to start the server, e.g.
        # net start MySQLXY
        #
        #
        # Guidelines for editing this file
        # ----------------------------------------------------------------------
        #
        # In this file, you can use all long options that the program supports.
        # If you want to know the options a program supports, start the program
        # with the "--help" option.
        #
        # More detailed information about the individual options can also be
        # found in the manual.
        #
        #
        # CLIENT SECTION
        # ----------------------------------------------------------------------
        #
        # The following options will be read by MySQL client applications.
        # Note that only client applications shipped by MySQL are guaranteed
        # to read this section. If you want your own MySQL client program to
        # honor these values, you need to specify it as an option during the
        # MySQL client library initialization.
        #
        [client]
        port=3306

        [mysql]
        default-character-set=latin1

        # SERVER SECTION
        # ----------------------------------------------------------------------
        #
        # The following options will be read by the MySQL Server. Make sure that
        # you have installed the server correctly (see above) so it reads this
        # file.
        #
        [mysqld]

        # The TCP/IP Port the MySQL Server will listen on
        port=3306

        #Path to installation directory. All paths are usually resolved relative to this.
        basedir="D:/MySQL/"

        #Path to the database root
        datadir="D:/MySQL/data"

        # The default character set that will be used when a new schema or table is
        # created and no character set is defined
        default-character-set=latin1

        # The default storage engine that will be used when creating new tables
        default-storage-engine=MYISAM

        # Set the SQL mode to strict
        #sql-mode="STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"
        # we changed this because there are a couple of queries that can get blocked otherwise
        sql-mode=""

        #performance configs
        skip-locking
        max_allowed_packet = 1M
        table_open_cache = 512

        # The maximum amount of concurrent sessions the MySQL server will
        # allow. One of these connections will be reserved for a user with
        # SUPER privileges to allow the administrator to login even if the
        # connection limit has been reached.
        max_connections=1510

        # Query cache is used to cache SELECT results and later return them
        # without actually executing the same query once again. Having the query
        # cache enabled may result in significant speed improvements, if you
        # have a lot of identical queries and rarely changing tables. See the
        # "Qcache_lowmem_prunes" status variable to check if the current value
        # is high enough for your load.
        # Note: In case your tables change very often or if your queries are
        # textually different every time, the query cache may result in a
        # slowdown instead of a performance improvement.
        query_cache_size=168M

        # The number of open tables for all threads. Increasing this value
        # increases the number of file descriptors that mysqld requires.
        # Therefore you have to make sure to set the amount of open files
        # allowed to at least 4096 in the variable "open-files-limit" in
        # section [mysqld_safe]
        table_cache=3020

        # Maximum size for internal (in-memory) temporary tables. If a table
        # grows larger than this value, it is automatically converted to a disk
        # based table. This limitation is for a single table. There can be many
        # of them.
        tmp_table_size=30M

        # How many threads we should keep in a cache for reuse. When a client
        # disconnects, the client's threads are put in the cache if there aren't
        # more than thread_cache_size threads from before. This greatly reduces
        # the amount of thread creations needed if you have a lot of new
        # connections. (Normally this doesn't give a notable performance
        # improvement if you have a good thread implementation.)
        thread_cache_size=64

        #*** MyISAM Specific options

        # The maximum size of the temporary file MySQL is allowed to use while
        # recreating the index (during REPAIR, ALTER TABLE or LOAD DATA INFILE).
        # If the file-size would be bigger than this, the index will be created
        # through the key cache (which is slower).
        myisam_max_sort_file_size=100G

        # If the temporary file used for fast index creation would be bigger
        # than using the key cache by the amount specified here, then prefer the
        # key cache method. This is mainly used to force long character keys in
        # large tables to use the slower key cache method to create the index.
        myisam_sort_buffer_size=64M

        # Size of the Key Buffer, used to cache index blocks for MyISAM tables.
        # Do not set it larger than 30% of your available memory, as some memory
        # is also required by the OS to cache rows. Even if you're not using
        # MyISAM tables, you should still set it to 8-64M as it will also be
        # used for internal temporary disk tables.
        key_buffer_size=3072M

        # Size of the buffer used for doing full table scans of MyISAM tables.
        # Allocated per thread, if a full scan is needed.
        read_buffer_size=2M
        read_rnd_buffer_size=8M

        # This buffer is allocated when MySQL needs to rebuild the index in
        # REPAIR, OPTIMIZE, ALTER TABLE statements as well as in LOAD DATA INFILE
        # into an empty table. It is allocated per thread so be careful with
        # large settings.
        sort_buffer_size=2M

        #*** INNODB Specific options ***
        innodb_data_home_dir="D:/MySQL InnoDB Datafiles/"

        # Use this option if you have a MySQL server with InnoDB support enabled
        # but you do not plan to use it. This will save memory and disk space
        # and speed up some things.
        skip-innodb

        # Additional memory pool that is used by InnoDB to store metadata
        # information. If InnoDB requires more memory for this purpose it will
        # start to allocate it from the OS. As this is fast enough on most
        # recent operating systems, you normally do not need to change this
        # value. SHOW INNODB STATUS will display the current amount used.
        innodb_additional_mem_pool_size=11M

        # If set to 1, InnoDB will flush (fsync) the transaction logs to the
        # disk at each commit, which offers full ACID behavior. If you are
        # willing to compromise this safety, and you are running small
        # transactions, you may set this to 0 or 2 to reduce disk I/O to the
        # logs. Value 0 means that the log is only written to the log file and
        # the log file flushed to disk approximately once per second. Value 2
        # means the log is written to the log file at each commit, but the log
        # file is only flushed to disk approximately once per second.
        innodb_flush_log_at_trx_commit=1

        # The size of the buffer InnoDB uses for buffering log data. As soon as
        # it is full, InnoDB will have to flush it to disk. As it is flushed
        # once per second anyway, it does not make sense to have it very large
        # (even with long transactions).
        innodb_log_buffer_size=6M

        # InnoDB, unlike MyISAM, uses a buffer pool to cache both indexes and
        # row data. The bigger you set this the less disk I/O is needed to
        # access data in tables. On a dedicated database server you may set this
        # parameter up to 80% of the machine physical memory size. Do not set it
        # too large, though, because competition of the physical memory may
        # cause paging in the operating system. Note that on 32bit systems you
        # might be limited to 2-3.5G of user level memory per process, so do not
        # set it too high.
        innodb_buffer_pool_size=500M

        # Size of each log file in a log group. You should set the combined size
        # of log files to about 25%-100% of your buffer pool size to avoid
        # unneeded buffer pool flush activity on log file overwrite. However,
        # note that a larger logfile size will increase the time needed for the
        # recovery process.
        innodb_log_file_size=100M

        # Number of threads allowed inside the InnoDB kernel. The optimal value
        # depends highly on the application, hardware as well as the OS
        # scheduler properties. A too high value may lead to thread thrashing.
        innodb_thread_concurrency=10

        #replication settings (this is the master)
        log-bin=log
        server-id = 1

    Thanks for all the help. It is greatly appreciated.
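
    (An aside, not from the original post: before adjusting individual buffers, a few stock MySQL diagnostics can show where the CPU time is actually going. These are standard statements, nothing specific to this server.)

        SHOW FULL PROCESSLIST;                          -- which queries are running right now
        SHOW GLOBAL STATUS LIKE 'Key_read%';            -- key buffer miss rate: Key_reads / Key_read_requests
        SHOW GLOBAL STATUS LIKE 'Qcache_lowmem_prunes'; -- whether query_cache_size is large enough
        SHOW GLOBAL STATUS LIKE 'Created_tmp%';         -- on-disk temp tables point at tmp_table_size or bad queries
        SHOW GLOBAL STATUS LIKE 'Select_full_join';     -- joins running without usable indexes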

    Read the article

  • Speed/expensive of SQLite query vs. List.contains() for "in-set" icon on list rows

    - by kpdvx
    An application I'm developing requires that the app maintain a local list of things, let's say books, in a local "library." Users can access their local library of books and search for books using a remote web service. The app will be aware of other users of the app through this web service, and users can browse other users' lists of books in their library. Each book is identified by a unique bookId (represented as an int). When viewing books returned through a search result or when viewing another user's book library, the individual list row cells need to visually represent whether the book is in the user's local library or not. A user can have at most 5,000 books in the library, stored in SQLite on the device (and synchronized with the remote web service). My question is: to determine if the book shown in the list row is in the user's library, would it be better to directly ask SQLite (via SELECT COUNT(*)...) or to maintain, in-memory, a List or int[] array of some sort containing the unique bookIds? So, on each row display, do I query SQLite or check if the List or int[] array contains the unique bookId? Because the user can have at most 5,000 books, each bookId occupies 4 bytes, so at most this would use ~20kB. In thinking about this, and in typing this out, it seems obvious to me that it would be far better for performance if I maintained a list or int[] array of in-library bookIds vs. querying SQLite (the only caveat to maintaining an int[] array is that if books are added or removed I'll need to grow or shrink the array by hand, so with this option I'll most likely use an ArrayList or Vector, though I'm not sure of the additional memory overhead of using Integer objects as opposed to primitives). Opinions, thoughts, suggestions?
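
    For what it's worth, a minimal sketch of the in-memory approach (names hypothetical; assumes the library IDs are loaded from SQLite once, not per row). A HashSet gives O(1) contains() and avoids both the per-row query and the linear scan of List.contains():

        import java.util.HashSet;
        import java.util.Set;

        public class LibraryIndex {
            private final Set<Integer> bookIds = new HashSet<Integer>();

            public LibraryIndex(int[] idsFromSqlite) {   // loaded once at startup
                for (int id : idsFromSqlite) bookIds.add(id);
            }

            public boolean isInLibrary(int bookId) {     // called per list row: O(1)
                return bookIds.contains(bookId);
            }

            public void add(int bookId)    { bookIds.add(bookId); }    // keep in sync
            public void remove(int bookId) { bookIds.remove(bookId); } // with SQLite
        }

    The boxing overhead of 5,000 Integers is on the order of a hundred kilobytes, which is still negligible next to running a query per visible row.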

    Read the article

  • Merging ILists to bind on datagridview to avoid using a database view

    - by P.Bjorklund
    In the form we have this, where IntaktsBudgetsType is a poorly named enum that only specifies whether to populate the datagridview by customer or by product (you do the budgeting either by product or by customer):

        private void UpdateGridView()
        {
            bs = new BindingSource();
            bs.DataSource = intaktsbudget.GetDataSource(this.comboBoxKundID.Text, IntaktsBudgetsType.PerKund);
            dataGridViewIntaktPerKund.DataSource = bs;
        }

    This populates the datagridview with a database view that merges the product, budget and customer tables. The logic has the following method to get the correct set of IList from the repository, which only does GetTable<T>.ToList<T>:

        public IEnumerable<IntaktsBudgetView> GetDataSource(string id, IntaktsBudgetsType type)
        {
            IList<IntaktsBudgetView> list = repository.SelectTable<IntaktsBudgetView>();
            switch (type)
            {
                case IntaktsBudgetsType.PerKund:
                    return from i in list where i.kundId == id select i;
                case IntaktsBudgetsType.PerProdukt:
                    return from i in list where i.produktId == id select i;
            }
            return null;
        }

    Now I don't want to use a database view, since that is read-only and I want to be able to perform CRUD actions on the datagridview. I could build a class that acts as a wrapper for the whole thing and bind the different table values to class properties, but that doesn't seem quite right, since I would have to do this for every single thing that requires "the merge". Something pretty important (and probably basic) is missing from the thought process, but after spending a weekend on Google and in books I give up and turn to the SO community.

    Read the article

  • How can I copy files in the middle of a build in Team System?

    - by Dana
    I have two solutions that I want to include in a build. Solution two requires the dll's from solution one to build successfully. Solution two has a Binaries folder where the dll's from solution one need to be copied before building solution two. I've been trying an AfterBuild target, hoping that it would copy the items after the first SolutionToBuild, but it doesn't fire then. I'm guessing that it would probably fire after both solutions have compiled, but that's not what I want.

        <SolutionToBuild Include="$(BuildProjectFolderPath)/../../Main/Framework.sln">
          <Targets>AfterCompileFramework</Targets>
          <Properties></Properties>
        </SolutionToBuild>
        <SolutionToBuild Include="$(BuildProjectFolderPath)/../../../Dashboard/Main/Dashboard.sln">
          <Targets></Targets>
          <Properties></Properties>
        </SolutionToBuild>

        <ItemGroup>
          <FrameworkBinaries Include="$(DropLocation)\$(BuildNumber)\Release\Framework.*.dll"/>
        </ItemGroup>
        <Message Text="FrameworkBinaries: @(FrameworkBinaries)" Importance="high"/>
        <Copy SourceFiles="@(FrameworkBinaries)" DestinationFolder="$(BuildProjectFolderPath)/../../../Dashboard/Main/Binaries"/>
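
    A hedged sketch of one way this is commonly wired up: ItemGroup and Copy only execute when they live inside a Target, and TFS 2008 Team Build exposes a per-solution extensibility target (AfterCompileSolution) that runs after each SolutionToBuild, i.e. between the two solutions here. The paths are assumptions; note also that during compilation the outputs are under $(BinariesRoot), not the drop location, which is only populated at the end of the build. (ItemGroup inside a Target needs MSBuild 3.5; older builds would use CreateItem.)

        <Target Name="AfterCompileSolution">
          <!-- Runs after each solution compiles; after Framework.sln the dlls
               exist under BinariesRoot, so Dashboard.sln sees them in time. -->
          <ItemGroup>
            <FrameworkBinaries Include="$(BinariesRoot)\Release\Framework.*.dll"/>
          </ItemGroup>
          <Copy SourceFiles="@(FrameworkBinaries)"
                DestinationFolder="$(BuildProjectFolderPath)/../../../Dashboard/Main/Binaries"/>
        </Target>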

    Read the article

  • Reference remotely located assembly (web uri) from locally installed application?

    - by moonground.de
    Hi Stackoverflowers! :) We have a .NET application for Windows which is installed locally by Microsoft Installer. Now we need to use additional assemblies which are located online on our web servers. We'd like to refer to a remote uri like https://www.ourserver.com/OurProductName/ExternalLib.dll and expose additional functionality, which is described roughly by a known common ("AddIn/Plugin") interface. These are not 3rd party plugins; we just want to be able to exchange parts of the application frequently, without the need for frequent software updates. Our first idea was to add some kind of "remote reference" in Visual Studio by setting the path to the remote assembly uri. But Visual Studio downloaded the assembly immediately to a temporary directory, adding a reference to it. Our second attempt, then, is simply using a WebRequest (or WebClient) to retrieve a binary stream of the assembly, loading it "from image" by using Assembly.Load(...). This actually works, but is not very elegant and requires more additional programming for verification etc. We hoped ClickOnce would provide useful techniques, but apparently it's suitable for standalone applications only. (Correct me?) Is there a way (.net native or by framework/api) to reference remotely located assemblies? Thanks in advance and have a happy easter!
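
    For reference, a minimal sketch of the download-and-load approach described above (the URL is the one from the question; a real deployment would verify a strong name or Authenticode signature before loading, which is exactly the extra programming the poster mentions):

        using System;
        using System.Net;
        using System.Reflection;

        class RemotePluginLoader
        {
            static Assembly LoadRemote(string url)
            {
                using (var client = new WebClient())
                {
                    byte[] image = client.DownloadData(url);   // raw assembly bytes
                    // TODO: verify publisher (strong name / signature) here
                    return Assembly.Load(image);               // load "from image"
                }
            }

            static void Main()
            {
                var asm = LoadRemote("https://www.ourserver.com/OurProductName/ExternalLib.dll");
                Console.WriteLine(asm.FullName);
            }
        }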

    Read the article

  • using replace to produce javascript code, django

    - by durdenk
    I want to use highcharts with my django site, but it requires a complex javascript code such as the one below. So I wanted to get this script into my python code, replace the appropriate portions, then write it into my template. First question: is this a dumb way to do it for a person not knowing javascript? (I can read it, though.) Second question: why can't I replace this string? Let's say this string is a variable like this:

        lineChartsTemplate = """ ... ... """

    If I try and do

        lineChartsTemplate.replace('dataCategory', dataCategory)

    it's basically supposed to change the dataCategory text to my dataCategory variable, but no such luck. I need guidance here. thx.

        $(function () {
            var chart = new Highcharts.Chart({
                chart: { renderTo: 'container', type: 'bar' },
                xAxis: { categories: dataCategory },
                yAxis: { },
                legend: {
                    layout: 'vertical', floating: true, backgroundColor: '#FFFFFF',
                    align: 'right', verticalAlign: 'top', y: 60, x: -60
                },
                tooltip: {
                    formatter: function() {
                        return '<b>'+ this.series.name +'</b><br/>'+ this.x +': '+ this.y;
                    }
                },
                plotOptions: { },
                series: [{ data: dataList, name: 'Satislar' }]
            });
        });
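
    A note on the second question: Python strings are immutable, so str.replace() returns a new string rather than modifying the variable in place; the result has to be assigned back. A minimal sketch (the category values here are hypothetical), which also serializes the list as JSON so it becomes valid javascript:

        import json

        # hypothetical values standing in for the real query results
        dataCategory = ['Jan', 'Feb', 'Mar']

        # replace() returns a NEW string; the original is left untouched,
        # so the call must be assigned back to the variable
        lineChartsTemplate = lineChartsTemplate.replace(
            'dataCategory', json.dumps(dataCategory))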

    Read the article

  • How to convert a lambda to an std::function using templates

    - by retep998
    Basically, what I want to be able to do is take a lambda with any number of parameters of any type and convert it to an std::function. I've tried the following, and neither method works.

        std::function([](){}); // Complains that std::function is missing template parameters

        template <typename T>
        void foo(function<T> f) {}
        foo([](){}); // Complains that it cannot find a matching candidate

    The following code does work, however, but it is not what I want, because it requires explicitly stating the template parameters, which does not work for generic code.

        std::function<void()>([](){});

    I've been mucking around with functions and templates all evening and I just can't figure this out, so any help would be much appreciated. As mentioned in a comment, the reason I'm trying to do this is that I'm trying to implement currying in C++ using variadic templates. Unfortunately, this fails horribly when using lambdas. For example, I can pass a standard function using a function pointer:

        template <typename R, typename...A>
        void foo(R (*f)(A...)) {}
        void bar() {}
        int main() { foo(bar); }

    However, I can't figure out how to pass a lambda to such a variadic function. Why I'm interested in converting a generic lambda into an std::function is because I can do the following, but it ends up requiring that I explicitly state the template parameters to std::function, which is what I am trying to avoid:

        template <typename R, typename...A>
        void foo(std::function<R(A...)>) {}
        int main() { foo(std::function<void()>([](){})); }
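
    One technique worth sketching here (an assumption-laden sketch, not a complete solution; it won't handle overloaded or generic operator()): a non-generic lambda has exactly one operator(), so its signature can be recovered through a traits class and used to name the right std::function specialization:

        #include <functional>

        // Deduce the call signature of a callable from its operator().
        template <typename T>
        struct function_traits : function_traits<decltype(&T::operator())> {};

        // Lambdas have a const call operator: R (C::*)(A...) const.
        template <typename C, typename R, typename... A>
        struct function_traits<R (C::*)(A...) const> {
            using function_type = std::function<R(A...)>;
        };

        template <typename F>
        typename function_traits<F>::function_type to_function(F f) {
            return typename function_traits<F>::function_type(f);
        }

        int main() {
            auto fn = to_function([](int x) { return x + 1; }); // std::function<int(int)>
            return fn(41) == 42 ? 0 : 1;
        }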

    Read the article

  • Better code for accessing fields in a matlab structure array?

    - by John
    I have a matlab structure array Models1 of size (1x180) that has fields a, b, c, ..., z. I want to understand how many distinct values there are in each of the fields, i.e.

        max(grp2idx([foo(:).a]))

    The above works if the field a is a double. {foo(:).a} needs to be used in the case where the field a is a string/char. Here's my current code for doing this. I hate having to use the eval, and what is essentially a switch statement. Is there a better way?

        names = fieldnames(Models1);
        for ix = 1 : numel(names)
            className = eval(['class(Models1(1).',names{ix},')']);
            if strcmp('double', className) || strcmp('logical',className)
                eval([' values = [Models1(:).',names{ix},'];']);
            elseif strcmp('char', className)
                eval([' values = {Models1(:).',names{ix},'};']);
            else
                disp(['Unrecognized class: ', className]);
            end
            % this line requires the statistics toolbox.
            [g, gn, gl] = grp2idx(values);
            fprintf('%30s : %4d\n',names{ix},max(g));
        end
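
    For comparison, a sketch of the same loop using dynamic field names (the .(name) syntax), which removes every eval, including the one that builds the class() call as a string:

        names = fieldnames(Models1);
        for ix = 1:numel(names)
            sample = Models1(1).(names{ix});      % dynamic field access, no eval
            if isnumeric(sample) || islogical(sample)
                values = [Models1(:).(names{ix})];
            elseif ischar(sample)
                values = {Models1(:).(names{ix})};
            else
                disp(['Unrecognized class: ', class(sample)]);
                continue;
            end
            [g, gn, gl] = grp2idx(values);        % statistics toolbox
            fprintf('%30s : %4d\n', names{ix}, max(g));
        end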

    Read the article

  • PHP OO vs Procedure with AJAX

    - by vener
    I currently have an AJAX-heavy (almost everything) intranet webapp for a business. It is highly modularized (components and modules ala Joomla), with plenty of folders and files: ~80-100 different viewing pages (each very unique in its own sense) at last count, and likely to increase in the near future. I based the design around commands and screens: the client requests a command and sends the required data, then receives the data that is displayed via javascript on the screen. That said, there are generally two types of files: a display file with html, javascript, and a little php for templating, and a php backend file with a single switch statement with actions such as save, update and delete, and maybe other functions. There is very little code reuse. Recently, I have been adding a server-side undo function that requires me to reuse some code. So I took the chance to try out OOP, but I noticed that some functions are so simple that creating a class, retrieving all the data, then updating all the related rows in the database seems like overkill for a simple action, as speed is quite critical. Also, I noticed there is only one class per file. So, what if the entire php file is a class? So, between creating a class with methods, and using global variables and functions, which is faster?

    Read the article

  • Best solution for __autoload

    - by tpk
    As our PHP5 OO application grew (in both size and traffic), we decided to revisit the __autoload() strategy. We always name the file by the class definition it contains, so class Customer would be contained within Customer.php. We used to list the directories in which a file can potentially exist, until the right .php file was found. This is quite inefficient, because you're potentially going through a number of directories which you don't need to, and doing so on every request (thus making loads of stat() calls). Solutions that come to my mind:

    - use a naming convention that dictates the directory name (similar to PEAR). Disadvantages: doesn't scale too well, resulting in horrible class names.
    - come up with some kind of pre-built array of the locations (Propel does this for its __autoload). Disadvantage: requires a rebuild before any deploy of new code.
    - build the array "on the fly" and cache it. This seems to be the best solution, as it allows for any class names and directory structure you want, and is fully flexible in that new files just get added to the list. The concerns are: where to store it, and what about deleted/moved files? For storage we chose APC, as it doesn't have the disk I/O overhead. With regards to file deletes, it doesn't matter, as you probably don't wanna require them anywhere anyway. As to moves... that's unresolved (we ignore it, as historically it didn't happen very often for us).

    Any other solutions?
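
    A sketch of the third option as described (helper name and root path are hypothetical; the directory walk is the cost the APC cache is meant to amortize, and a rebuild-on-miss picks up newly added files):

        <?php
        function build_classmap($root) {
            // hypothetical scanner: walk $root once, map class name -> file path
            $map = array();
            $it = new RecursiveIteratorIterator(new RecursiveDirectoryIterator($root));
            foreach ($it as $file) {
                if (substr($file->getFilename(), -4) === '.php') {
                    $map[$file->getBasename('.php')] = $file->getPathname();
                }
            }
            return $map;
        }

        function __autoload($class) {
            $map = apc_fetch('classmap');
            if ($map === false || !isset($map[$class])) {
                $map = build_classmap('/path/to/app'); // rebuild on miss
                apc_store('classmap', $map);
            }
            if (isset($map[$class])) {
                require $map[$class];
            }
        }
        ?>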

    Read the article

  • Recommended approach for error handling with PHP and MYSQL

    - by iama
    I am trying to capture database (MySQL) errors in my PHP web application. Currently, I see that there are functions like mysqli_error() and mysqli_errno() for capturing the last error that occurred. However, this still requires me to check for errors using repeated if/else statements in my php code. You may check my code below to see what I mean. Is there a better approach to doing this? (or) Should I write my own code to raise exceptions and catch them in one single place? What is the recommended approach? Also, does PDO raise exceptions? Thanks.

        function db_userexists($name, $pwd, &$dbErr)
        {
            $bUserExists = false;
            $uid = 0;
            $dbErr = '';
            $db = new mysqli(SERVER, USER, PASSWORD, DB);
            if (!mysqli_connect_errno()) {
                $query = "select uid from user where uname = ? and pwd = ?";
                $stmt = $db->prepare($query);
                if ($stmt) {
                    if ($stmt->bind_param("ss", $name, $pwd)) {
                        if ($stmt->bind_result($uid)) {
                            if ($stmt->execute()) {
                                if ($stmt->fetch()) {
                                    if ($uid)
                                        $bUserExists = true;
                                }
                            }
                        }
                    }
                    if (!$bUserExists)
                        $dbErr = $db->error;
                    $stmt->close();
                }
                if (!$bUserExists)
                    $dbErr = $db->error;
                $db->close();
            } else {
                $dbErr = mysqli_connect_error();
            }
            return $bUserExists;
        }
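
    On the last question: yes, PDO can be told to throw. A sketch of the same function with exceptions caught in one place (assumes the same constants and user table as the code above; the DSN host/db names would come from those constants):

        <?php
        function db_userexists($name, $pwd, &$dbErr)
        {
            $dbErr = '';
            try {
                $db = new PDO('mysql:host=' . SERVER . ';dbname=' . DB, USER, PASSWORD);
                $db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
                $stmt = $db->prepare('select uid from user where uname = ? and pwd = ?');
                $stmt->execute(array($name, $pwd));
                return $stmt->fetchColumn() !== false;   // any row means the user exists
            } catch (PDOException $e) {                  // one catch replaces the if/else pyramid
                $dbErr = $e->getMessage();
                return false;
            }
        }
        ?>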

    Read the article

  • $_SERVER['HTTP_REFERER'] not working with Header

    - by EmmyS
    I have a site that allows public access to some pages, but requires a login for others. I have a link to the login from all pages, and what I'd like to do after a successful login is send the user back to the page they were on when they clicked the login link. I know the HTTP_REFERER can be spoofed, and sometimes stripped out by certain hosts and proxies, but since it's strictly within my own site, and only a convenience for users, I'm not too worried about it. I am curious about why it isn't working in conjunction with a redirect, though. I've set a visible field to contain the value of the http referer, and it displays correctly, so the page is getting the value of the referrer variable. But when I try this:

        $home_url = $_SERVER['HTTP_REFERER'];
        header('Location: ' . $home_url);

    it doesn't work. This, on the other hand, does:

        $home_url = 'http://' . $_SERVER['HTTP_HOST'] . dirname($_SERVER['PHP_SELF']) . '/discussions.php';
        header('Location: ' . $home_url);

    So I know the header location part works. Any idea why it doesn't want to work in conjunction with the http_referer variable? (Also, does it drive anyone else nuts that referer is spelled incorrectly? I keep mistyping it using the OED spelling, silly me...)
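
    A hedged guess at the cause, with a sketch: by the time the login form is submitted, HTTP_REFERER points at the login page itself (the form post is a new navigation), so the redirect no longer has the original page. Carrying the referrer through the form avoids that (field name hypothetical):

        <?php
        // When rendering the login form, the referrer still names the page
        // the user came from, so stash it in the form:
        $back = isset($_SERVER['HTTP_REFERER']) ? $_SERVER['HTTP_REFERER'] : 'discussions.php';
        echo '<input type="hidden" name="return_to" value="'
           . htmlspecialchars($back) . '">';

        // When processing the successful login POST, HTTP_REFERER now names
        // the login page itself, so redirect to the carried-through value:
        header('Location: ' . $_POST['return_to']);
        exit;
        ?>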

    Read the article

  • File write not getting updated in Qt 4.5.3

    - by user249490
    Hi, I have an XML file. My application requires manipulation of that XML file: I will be writing values into it. I also have an interface to display the read contents of the file. The user might add values into that XML (through an interface). Without closing the application he may decide to display the file contents as well. Now the problem is, after I write the XML contents into the file, when I view the file through the interface, the values are not getting updated. After I close the application and open it again, the updated values are available. I am using the following to achieve this: for writing, QXmlStreamWriter, and for reading, QDomDocument and QDomNodeList. After I complete the writing, I flush and close the file too:

        lFile.flush();
        lFile.close();

    After reading also I closed the file. Can someone tell me what I am doing wrong?
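
    Hard to say without seeing the reading code, but one common cause is repopulating the view from a QDomDocument that was parsed earlier, rather than reparsing the file after each write. A minimal re-read sketch (file and tag names hypothetical):

        // Reopen and reparse on every refresh so the view sees the latest write.
        QFile lFile("data.xml");                       // hypothetical file name
        if (lFile.open(QIODevice::ReadOnly)) {
            QDomDocument doc;
            if (doc.setContent(&lFile)) {              // parses the current on-disk contents
                QDomNodeList nodes = doc.elementsByTagName("entry");  // hypothetical tag
                // ... repopulate the display from 'nodes' ...
            }
            lFile.close();
        }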

    Read the article

  • Caching vector addition over changing collections

    - by DRMacIver
    I have the following setup: I have a largish number of uuids (currently about 10k but expected to grow unboundedly - they're user IDs) and a function f : id -> sparse vector with 32-bit integer values (no need to worry about precision). The function is reasonably expensive (not outrageously so, but probably on the order of a few 100ms for a given id). The dimension of the sparse vectors should be assumed to be infinite, as new dimensions can appear over time, but in practice is unlikely to ever exceed about 20k (and individual results of f are unlikely to have more than a few hundred non-zero values). I want to support the following operations efficiently:

    - add a new ID to the collection
    - invalidate an existing ID
    - retrieve sum f(id) in O(changes since last retrieval)

    i.e. I want to cache the sum of the vectors in a way that's reasonable to do incrementally. One option would be to support a remove ID operation and treat invalidation as a remove followed by an add. The problem with this is that it requires us to keep track of all the old values of f, which is expensive in space. I potentially need to use many instances of this sort of cached structure, so I would like to avoid that. The likely usage pattern is that new IDs are added at a fairly continuous rate and are frequently invalidated at first. Ids which have been invalidated recently are much more likely to be invalidated again than ones which have remained valid for a long time, but in principle an old Id can still be invalidated. Ideally I don't want to do this in memory (or at least I want a way that lets me save the result to disk efficiently), so an idea which lets me piggyback off an existing DB implementation of some sort would be especially appreciated.
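
    A sketch of one in-memory baseline, under the assumption that storing the current f(id) per live id (one vector each, not the full history) is acceptable; that stored copy is what lets an invalidation subtract its old contribution, and recomputation is deferred so retrieval costs O(changes since last retrieval):

        from collections import Counter

        class CachedSum(object):
            def __init__(self, f):
                self.f = f           # f(id) -> sparse vector, here a Counter
                self.current = {}    # id -> the vector last folded into the sum
                self.total = Counter()
                self.dirty = set()   # ids whose f value must be (re)computed

            def add(self, uid):
                self.dirty.add(uid)

            def invalidate(self, uid):
                if uid in self.current:                 # subtract the stale contribution
                    self.total.subtract(self.current.pop(uid))
                self.dirty.add(uid)                     # recompute lazily on next sum()

            def sum(self):
                for uid in self.dirty:                  # O(changes since last retrieval)
                    vec = self.f(uid)
                    self.total.update(vec)
                    self.current[uid] = vec
                self.dirty.clear()
                return self.total

    The per-id store and the running total are both plain key/value data, so persisting them in an existing DB (one row per (id, dimension) plus a totals table) looks straightforward, though that part is not shown here.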

    Read the article

  • Sort queryset by a generic foreign key (django)?

    - by thornomad
    I am using Django's comment framework, which utilizes generic foreign keys. Question: How do I sort a given model's queryset by comment count using the generic foreign key lookup? Reading the django docs on the subject, it says one needs to calculate them without using the aggregation API: "Django's database aggregation API doesn't work with a GenericRelation. [...] For now, if you need aggregates on generic relations, you'll need to calculate them without using the aggregation API." The only way I can think of, though, would be to iterate through my queryset, generate a list with content_type and object_id's for each item, then run a second queryset on the Comment model filtering by this list of content_type and object_id ... sort the objects by the count, then re-create a new queryset in this order by pulling the content_object for each comment ... This just seems wrong and I'm not even sure how to pull it off. Ideas? Someone must have done this before. I found this post online, but it requires me to hand-write SQL -- is that really necessary?
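
    For what it's worth, a sketch of the hand-written-SQL workaround the poster found, via QuerySet.extra() (model and table names are hypothetical and must match your app; django_comments is the comments app's table, and its object_pk column is text, so some backends need a cast on the id comparison):

        from django.contrib.contenttypes.models import ContentType
        from myapp.models import MyModel  # hypothetical model

        ct = ContentType.objects.get_for_model(MyModel)
        qs = MyModel.objects.extra(
            select={'comment_count':
                'SELECT COUNT(*) FROM django_comments '
                'WHERE django_comments.content_type_id = %s '
                'AND django_comments.object_pk = myapp_mymodel.id'},
            select_params=(ct.id,),
            order_by=['-comment_count'],   # extra() can order by its own alias
        )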

    Read the article

  • Django templatetag "order of processing"

    - by Jason Persampieri
    I am trying to write a set of template tags that allow you to easily specify js and css files from within the template files themselves. Something along the lines of {% requires global.css %}, and later in the request, {% get_required_css %}. I have this mostly working, but there are a couple of issues. We'll start with the 'timing' issue. Each template tag is made up of two steps: call/init and render. Every call/init happens before any render procedure is called. In order to guarantee that all of the files are queued before the {% get_required_css %} is rendered, I need to build my list of required files in the call/init procedures themselves. So, I need to collect all of the files into one bundle per request. The context dict is obviously the place for this, but unfortunately the call/init doesn't have access to the context variable. Is this making sense? Anyone see a way around this (without resorting to a hacky global request object)? Another possibility is to store these in a local dict, but they would still need to be tied to the request somehow... possibly some sort of {% start_requires %} tag? But I have no clue how to make that work either.

    Read the article

  • I cannot seem to load an XML document using ASP (Classic), IIS6. Details inside.

    - by carny666
    So I am writing a web application for use within my organization. The application requires that it know who the current user is. This is done by calling the Request.ServerVariables("AUTH_USER") function, which works great as long as 'Anonymous Access' is disabled (unchecked) and 'Integrated Windows Authentication' is enabled (checked) within IIS for this subweb. Unfortunately, by doing this I get an 'Access Denied' error when I hit the load method of the XML DOM. Example code:

        dim urlToXmlFile
        urlToXmlFile = "http://currentwebserver/currentsubweb/nameofxml.xml"
        dim xmlDom
        set xmlDom = Server.CreateObject("MSXML2.DOMDocument")
        xmlDom.async = false
        xmlDom.load( urlToXmlFile ) ' <-- this is where I get the error!

    I've looked everywhere and cannot find a solution. I should be able to load an XML file into the DOM regardless of the authentication method. Any help would be appreciated. So far the only two solutions I can come up with are: a) create a new subweb that JUST gets the current user name and somehow passes it back to my XML-reading subweb, or b) open up security on the entire system to 'Everyone', which works, but our IS department wouldn't care for that.
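
    One hedged thought: loading the DOM over http issues a second, anonymous request, which Integrated Windows Authentication then rejects. If the XML lives inside the site, loading it straight from disk sidesteps that second request (and its authentication) entirely. A sketch:

        dim xmlDom
        set xmlDom = Server.CreateObject("MSXML2.DOMDocument")
        xmlDom.async = false
        ' Map the virtual path to a physical path and load from the file
        ' system, so no second HTTP request (and no second login) happens.
        xmlDom.load( Server.MapPath("nameofxml.xml") )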

    Read the article

  • How do I use XPath with a default namespace with no prefix?

    - by Scott Stafford
    What is the XPath (in the C# API XDocument.XPathSelectElements(xpath, nsman), if it matters) to query all MyNodes from this document?

        <?xml version="1.0" encoding="utf-8"?>
        <configuration>
          <MyNode xmlns="lcmp" attr="true">
            <subnode />
          </MyNode>
        </configuration>

    I tried /configuration/MyNode, which is wrong because it ignores the namespace. I tried /configuration/lcmp:MyNode, which is wrong because lcmp is the URI, not the prefix. I tried /configuration/{lcmp}MyNode, which failed with: Additional information: '/configuration/{lcmp}MyNode' has an invalid token. EDIT: I can't use mgr.AddNamespace("df", "lcmp"); as some of the answerers have suggested. That requires that the XML parsing program know all the namespaces I plan to use ahead of time. Since this is meant to be applicable to any source file, I don't know which namespaces to manually add prefixes for. It seems like {my uri} is the XPath syntax, but Microsoft didn't bother implementing that... true?
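
    A sketch of the local-name() workaround, which needs no prefix registration at all (doc is an XDocument; requires using System.Xml.Linq and System.Xml.XPath):

        // Match on the element's local name only, ignoring its namespace:
        var loose = doc.XPathSelectElements(
            "/configuration/*[local-name() = 'MyNode']");

        // Or keep the namespace check without ever binding a prefix:
        var strict = doc.XPathSelectElements(
            "/configuration/*[local-name() = 'MyNode' and namespace-uri() = 'lcmp']");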

    Read the article

  • Windows Phone - failing to get a string from a website with login information

    - by jumantyn
    I am new to accessing web services with Windows Phone 7/8. I'm using a WebClient to get a string from a php website. The site returns a JSON string, but at the moment I'm just trying to put it into a TextBox as a normal string, just to test if the connection works. The php page requires authentication, and I think that's where my code is failing. Here's my code:

        WebClient client = new WebClient();
        client.Credentials = new NetworkCredential("myUsername", "myPassword");
        client.DownloadStringCompleted += new DownloadStringCompletedEventHandler(client_DownloadStringCompleted);
        client.DownloadStringAsync(new Uri("https://www.mywebsite.com/ba/php/jsonstuff.php"));

        void client_DownloadStringCompleted(object sender, DownloadStringCompletedEventArgs e)
        {
            try
            {
                string data = e.Result;
                this.jsonText.Text = data;
            }
            catch (Exception ex)
            {
                System.Diagnostics.Debug.WriteLine(ex.Message);
            }
        }

    This returns first a WebException and then a TargetInvocationException. If I replace the Uri with, for example, "http://www.google.com/index.html", the jsonText TextBox gets filled with html text from Google (oddly enough, this also works even when the WebClient credentials are still set). So is the problem in the setting of the credentials? I couldn't find any good results when searching for guides on how to access php pages with credentials, only without them. Then I found a short mention somewhere of the WebClient.Credentials property. But should it work some other way?

    Update: here's what I can get out of the WebException:

        System.Net.WebException: The remote server returned an error: NotFound. ---
        System.Net.WebException: The remote server returned an error: NotFound.
           at System.Net.Browser.ClientHttpWebRequest.InternalEndGetResponse(IAsyncResult asyncResult)
           at System.Net.Browser.ClientHttpWebRequest.<>c__DisplayClasse.b__d(Object sendState)
           at System.Net.Browser.AsyncHelper.<>c__DisplayClass1.b__0(Object sendState)
           --- End of inner exception stack trace ---
           at System.Net.Browser.AsyncHelper.BeginOnUI(SendOrPostCallback beginMethod, Object state)
           at System.Net.Browser.ClientHttpWebRequest.EndGetResponse(IAsyncResult asyncResult)
           at System.Net.WebClient.GetWebResponse(WebRequest request, IAsyncResult result)
           at System.Net.WebClient.DownloadBitsResponseCallback(IAsyncResult result)
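
    A hedged sketch of one thing worth trying: if the page uses HTTP Basic authentication, sending the credentials preemptively in the Authorization header avoids depending on the 401 challenge round-trip that Credentials relies on (assumes using System.Net and System.Text; the username/password are the placeholders from the question):

        var client = new WebClient();
        // Build the Basic auth token by hand and attach it to every request.
        string token = Convert.ToBase64String(
            Encoding.UTF8.GetBytes("myUsername" + ":" + "myPassword"));
        client.Headers[HttpRequestHeader.Authorization] = "Basic " + token;
        client.DownloadStringCompleted += client_DownloadStringCompleted;
        client.DownloadStringAsync(new Uri("https://www.mywebsite.com/ba/php/jsonstuff.php"));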

    Read the article

  • j_security_check to SSO in different module under Oracle App Server?

    - by thebearinboulder
    I have an existing j2ee application running on Oracle App Server. It is targeted towards paying customers, so the content is secured and an SSO module properly intercepts all requests for secured content. Now the company is adding an unbranded public-facing module with a number of unsecured pages. At one point the user is expected to register for a free account and log in to proceed further. Think doctors adding a public-facing site with information for potential patients, or lawyers adding a public-facing site with information for potential clients. There's some information on the session, and the usual approach would be to authenticate the user, persist the session information using the now-known user id, invalidate the existing session (to prevent certain types of attacks), then reload the session information before returning to the user. I can't just persist it under the session id, since that's about to change. The glitch is that the existing application already has an SSO module, and I get a 404 error every time I try to direct to j_security_check. I've tried that, /sso/j_security_check, and even http://localhost/sso/j_security_check, all without success. I noticed that an earlier question said that Tomcat requires access to a secured page before j_security_check is even visible. I don't know if that's the case with Oracle AS. Ideas? Or is the best approach to continue arguing that we have a different user base, so it would be better to handle authentication in our own module anyway?

    Read the article

  • Using the AutoComplete feature of ComboBox, while limiting values to those in the list?

    - by Schmuli
    In WinForms 2.0, a ComboBox has an Auto-Complete feature that displays a custom drop-down list with only the values that start with the entered text. However, if I want to limit valid values to only those that appear in the ComboBox's list of items, I can do that by setting the DropDownStyle to DropDownList, which stops the user from entering a value. But now I can't use the Auto-Complete feature, which requires user input. Is there another way to limit input to the list, while still allowing use of the Auto-Complete feature? Note that I have seen some custom solutions for this, but I really like the way the matching Auto-Complete items are displayed in a drop-down list, and sorted, even though the original list may not be. EDIT: I have thought about just validating the entered value, i.e. testing user input in, say, the TextChanged event, or even using the Validating event. The question then is what the expected behavior should be: do I clear their value (an empty value is also invalid), or do I use a default value? The closest matching value? P.S. Are there any other tags I could add to this question?
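
    A sketch of the validation variant mentioned in the edit (the event handler uses C# 3 lambda syntax for brevity; the rejection policy is exactly the open question above, so the else-branch is a placeholder):

        comboBox1.DropDownStyle = ComboBoxStyle.DropDown;            // typing allowed
        comboBox1.AutoCompleteMode = AutoCompleteMode.SuggestAppend; // keep the drop-down
        comboBox1.AutoCompleteSource = AutoCompleteSource.ListItems; // suggest from the list
        comboBox1.Validating += (s, e) =>
        {
            if (!comboBox1.Items.Contains(comboBox1.Text))
            {
                e.Cancel = true;          // keep focus until the value is valid,
                comboBox1.SelectAll();    // or clear / fall back to a default here
            }
        };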

    Read the article

  • How good an idea is it to use code contracts in Visual Studio 2010 Professional (i.e. no static checking)?

    - by Lasse V. Karlsen
    I create class libraries, some of which are used by others around the world, and now that I'm starting to use Visual Studio 2010 I'm wondering how good an idea it is for me to switch to using code contracts instead of regular old-style if-statements. I.e. instead of this:

        if (String.IsNullOrWhiteSpace(fileName))
            throw new ArgumentNullException("fileName");

    (yes, I know, if it is whitespace, it isn't strictly null) use this:

        Contract.Requires(!String.IsNullOrWhiteSpace(fileName));

    The reason I'm asking is that I know the static checker is not available to me, so I'm a bit nervous about some assumptions that I make which the compiler cannot verify. This might lead to the class library not compiling for someone who downloads it, when they have the static checker. This, coupled with the fact that I cannot even reproduce the problem, would make it tiresome to fix, and I would gather it doesn't speak volumes about the quality of my class library if it seemingly doesn't even compile out of the box. So I have a few questions:

    - Is the static checker on by default if you have access to it? Or is there a setting I need to switch on in the class library (and since I don't have the static checker, I won't)?
    - Are my fears unwarranted? Is the above scenario a real problem?

    Any advice would be welcome.

    Read the article

  • PHP Mailer Class - Securing Email Credentials

    - by Alan A
    I am using the php mailer class to send email via my scripts. The structure is as follows:

        $mail = new PHPMailer;
        $mail->IsSMTP();                       // Set mailer to use SMTP
        $mail->Host = 'myserver.com';          // Specify main and backup server
        $mail->SMTPAuth = true;                // Enable SMTP authentication
        $mail->Username = '[email protected]';   // SMTP username
        $mail->Password = 'user123';           // SMTP password
        $mail->SMTPSecure = 'tls';             // Encryption (was mistyped as the password)

    It seems to me to be a bit of a security hole having the mailbox credentials in plain view, so I thought I might put these in an external file outside of the web root. My question is how I would then assign the $mail object these values. I of course know how to use include and/or require... would it simply be a case of:

        $mail->IsSMTP();                       // Set mailer to use SMTP
        $mail->Host = 'myserver.com';          // Specify main and backup server
        $mail->SMTPAuth = true;                // Enable SMTP authentication
        include '../locationOutsideWebroot/emailCredentials.php';
        $mail->SMTPSecure = 'tls';

    Then emailCredentials.php:

        <?php
        $mail->Username = '[email protected]';
        $mail->Password = 'user123';
        ?>

    Would this be sufficient and secure enough? Thanks, Alan.
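
    For what it's worth, that pattern does work, since include runs in the calling scope and $mail is visible inside the included file. A slightly more defensive sketch (constant names and the placeholder address are hypothetical) that avoids depending on $mail being in scope at include time:

        <?php
        // ../locationOutsideWebroot/emailCredentials.php
        define('SMTP_USER', 'user@example.com');   // placeholder address
        define('SMTP_PASS', 'user123');
        ?>

        <?php
        // in the mailing script:
        require '../locationOutsideWebroot/emailCredentials.php';
        $mail->Username = SMTP_USER;
        $mail->Password = SMTP_PASS;
        ?>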

    Read the article

  • Storing strings from a text file/scanner. Array or built-in?

    - by Flowdorio
    This code snippet is to read a text file and turn the lines into objects through a different public (non-changeable) class (externalClass). The external class does nothing but turn strings (lines from the .txt, via nextLine) into objects, and is fully functional. The scanner (scanner3) is assigned to the text file.

        while (scanner3.hasNext()) {
            externalClass convertedlines = new externalClass(scanner3.nextLine());
        }

    I'm not new to programming, but as I'm new to java, I do not know if this requires me to create an array, or if the returned objects are stored in some other way. I.e. is "convertedlines" getting overwritten with each run of the loop (and I need to introduce an array into the loop), or are the objects stored in some way? The question may seem strange, but with the program I am making it would be harder (but definitely not impossible) if I used an array. Any help would be appreciated. As requested, externalClass:

        public class externalClass {
            private String line;

            externalClass(String inLine) {
                line = inLine;
            }

            public String giveLine() {
                return line;
            }
        }
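
    To the question itself: the local variable is re-bound on every pass, and once the loop moves on, the previous object has no reference left, so nothing is kept automatically. A minimal sketch that collects the objects in a List (which grows by itself, so no fixed-size array bookkeeping is needed):

        import java.util.ArrayList;
        import java.util.List;
        import java.util.Scanner;

        public class LineReader {
            // Each parsed line survives the loop because the List holds a
            // reference to it; the local variable alone would not.
            static List<externalClass> readAll(Scanner scanner3) {
                List<externalClass> convertedLines = new ArrayList<externalClass>();
                while (scanner3.hasNextLine()) {
                    convertedLines.add(new externalClass(scanner3.nextLine()));
                }
                return convertedLines;
            }
        }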

    Read the article

  • Stopping process in /etc/inittab kills spawned process. Doesn't happen in rc.local.

    - by Paul
    Hi, I'm trying to execute a firmware upgrade while my program is running from inittab. My program runs 2 commands: one to extract the installer script from the tarball, and the other to execute the installer script. In my code I'm using the system() function call. These are the 2 command strings:

        system( "tar zvxf tarball.tar.gz -C / installer.sh 2>&1" );
        system( "nohup installer.sh tarball >/dev/null 2>&1 &" );

    The installer script requires the tarball as an argument. I've tried using sudo, but I still have the same problem. I've tried nohup with no success. The installer script has to kill my program when doing the firmware upgrade, but the installer script should stay alive. If my program is run from the command line or rc.local on my target device, my upgrade works fine, i.e. when my program is killed my installer script continues. But I need to run my program from /etc/inittab so it can respawn if it dies. To stop my program in inittab, the installer script will hash it out and execute "telinit q". This is where my program dies (but that's what I want it to do), but it also kills my installer script. Does anyone know why this is happening and what I can do to solve it? Thanks in advance.
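
    A hedged guess at the cause, with a sketch: the installer inherits the parent's session and process group, and signals aimed at that group when init tears down the inittab entry hit the child too; nohup only blocks SIGHUP. Starting from a shell or rc.local puts the program in a different session, which would explain the difference. If setsid(1) exists on the target (busybox builds often include it), launching the installer in its own session may help:

        #include <stdlib.h>

        /* Extract the installer, then launch it in a new session so a signal
         * aimed at this program's process group cannot reach it. */
        int run_upgrade(void)
        {
            if (system("tar zvxf tarball.tar.gz -C / installer.sh 2>&1") != 0)
                return -1;
            return system("setsid installer.sh tarball >/dev/null 2>&1 < /dev/null &");
        }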

    Read the article
