Search Results

Search found 4468 results on 179 pages for 'zone transfer'.


  • EC2 Configuration

    - by user123683
    I am trying to design a server structure for my EC2 account. The design I have chosen consists of two instances running in different availability zones, an elastic load balancer, an auto-scaling group with CloudWatch monitoring configured, and a security group defining rules for access to the instances. This setup is to support an online web application written in PHP.
    I am trying to decide which is the better policy: store the MySQL DB on a separate instance, or store the MySQL DB on an attached EBS volume. (From what I know, auto-scaling will not replicate the attached EBS volume but will generate new instances from a chosen AMI - is this view correct?)
    Regarding the AMI, I plan to use a basic Amazon Linux 64-bit AMI and install Bastille (maybe OSSEC), but I am also looking to use an encrypted file system. Are there any issues with an encrypted file system and communication between the DB and the web app that I need to be aware of? Are there any communication issues when the encrypted filesystem is on the instance housing the web app?
    I was going to launch a second instance, or attach a second volume, in the second availability zone to act as a standby for the database. I'm looking for suggestions on how to get the two DBs to talk - will this be a big task?
    Regarding updates for security, is it best to create a recent snapshot and just relaunch, allowing Amazon to install updates on launch, or is the yum update mechanism a suitable alternative? Is it better practice to relaunch instead of installing updates that force a restart?
    I plan to create two AMI snapshots, one for the app server and one for the DB, each with the same security measures in place. Is this reasonable? I figure it is a better policy than including unnecessary additional applications in an AMI that I intend to reuse.
    My plan for backup is to create periodic snapshots of the web app and DB instances. (If I use an additional EBS volume instead of a separate instance, my understanding is that the EBS volume will persist in S3 storage in the event of an unexpected termination, and I can create snapshots of the volume for backup purposes.)
    Thanks in advance for suggestions and advice. I am new to EC2 and may have described unnecessary overkill, but I want to try to implement what can be considered a best-practice solution, so all advice is appreciated.
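
    A minimal sketch of how the periodic-snapshot backup step above can be scripted, assuming the boto3 SDK with configured credentials; the region, volume ID and description are placeholders, not values from the question:

        import boto3

        ec2 = boto3.client("ec2", region_name="us-east-1")

        # Snapshot each data volume; EBS snapshots are persisted to S3 behind the scenes.
        for volume_id in ["vol-0123456789abcdef0"]:   # hypothetical volume ID
            snapshot = ec2.create_snapshot(
                VolumeId=volume_id,
                Description="nightly backup of the MySQL data volume",
            )
            print(snapshot["SnapshotId"])

    For a consistent MySQL snapshot, the usual practice is to flush and briefly lock (or stop) the database while the snapshot is initiated, since a snapshot taken of a busy volume is only crash-consistent.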

    Read the article

  • How can I let the Qt Graphics View Framework support custom layers?

    - by jnblue
    Qt's Graphics View framework is very powerful, but I have not found a way to support custom layers. There is a QGraphicsScene::ItemLayer, but QGraphicsScene renders all items in this layer. I want to manage items with several layers, just like Illustrator and CorelDRAW: only the items in the current layer should receive events, be selectable, or get key focus, and the other (non-current) layers should not receive any scene events. The main reason for using layers is that I could catalogue a large number of items more clearly, and by not having to forward events to every layer's items I think the framework would be more efficient. One last question: does QGraphicsView support rendering several stacked graphics scenes at the same time? If so, I think the "custom layers" problem could be solved that way. Thanks very much!
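
    Since the framework itself only offers Z-ordering, one common workaround is to model each layer as a group item and make only the current layer interactive. A minimal sketch of that idea, assuming PyQt5 (the C++ API is equivalent); the names and layer scheme here are illustrative only:

        import sys
        from PyQt5.QtWidgets import (QApplication, QGraphicsItem, QGraphicsItemGroup,
                                     QGraphicsRectItem, QGraphicsScene, QGraphicsView)

        def make_layer(scene, z):
            layer = QGraphicsItemGroup()
            layer.setHandlesChildEvents(False)   # children keep their own mouse/selection handling
            layer.setZValue(z)
            scene.addItem(layer)
            return layer

        def set_current(layer, current):
            layer.setEnabled(current)            # disabled items (and their children) receive no scene events
            for child in layer.childItems():
                child.setFlag(QGraphicsItem.ItemIsSelectable, current)

        app = QApplication(sys.argv)
        scene = QGraphicsScene()
        background = make_layer(scene, 0)
        drawing = make_layer(scene, 1)

        item = QGraphicsRectItem(0, 0, 80, 40)
        item.setParentItem(drawing)              # the item now belongs to the "drawing" layer

        set_current(background, False)
        set_current(drawing, True)

        view = QGraphicsView(scene)
        view.show()
        sys.exit(app.exec_())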

    Read the article

  • Setup of high-end web server and DB server cluster on Amazon EC2: Is this how it's done?

    - by user1086584
    Amazon is so technical that I want to confirm my understanding is correct. We have a large 500 GB database (OrientDB), which we will mirror between instances in the same Availability Zone, and we believe the database size will grow rapidly. The plan is:
    Get 4 large instances of types compatible with Placement Groups (and ideally Enhanced Networking): 2 for web, 2 for DB. We will use EBS-backed instances to store our operating system (discussion here: http://alestic.com/2012/01/ec2-ebs-boot-recommended). We can set up ephemeral SSD instance storage as swap space (but it is lost after even a reboot; I hear it's hard to add ephemeral storage if booting from EBS, but possible). For offsite backup, we will take periodic snapshots and store them on S3. Obviously we need to ensure the database is in a safe state when that snapshot happens, to avoid corruption (any hints here, aside from shutting down the DB?). If the database gets too big, we need to create a larger EBS volume; we can use RAID to break the 1 TB limit: http://alestic.com/2009/06/ec2-ebs-raid. Static assets on the web servers will be stored on S3.
    Is that correct? Or am I missing something?

    Read the article

  • MySQL auto increment

    - by mouthpiec
    Hi, I have a table with an auto-increment field, and I need to transfer the table to another table in another database. Will the value of row 1 still be 1, that of row 2 be 2, and so on? Also, if the database gets corrupted and I need to restore the data, will the auto-increment be affected in some way - will the values change? (E.g. if the first row is id (auto-inc) = 1, name = john, country = UK, will the id field remain 1?) I am asking because if other tables refer to this value, all data will get out of sync if this field changes.

    Read the article

  • How do I uncompress data in PHP which was originally compressed using zlib?

    - by Gaurav Arora
    Hello everyone, I am quite new to iPhone development, so please bear with me if I ask some common questions. In my application I have to transfer data from my iPhone app to a PHP server: I compress the NSData in the iPhone app, pass it to the PHP server, and then need to uncompress it in PHP and process the data sent by the iPhone. For compressing the data on the iPhone I have used the zlib library. Now on the PHP side I want to uncompress this data, but I am unable to do so. Can anyone help me uncompress this data in PHP? Thanks in advance. Gaurav Arora
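
    For reference, a minimal round-trip sketch in Python that shows the framing involved, assuming the data was produced with zlib's default compression (the zlib/RFC 1950 wrapper rather than raw deflate or gzip); on the PHP side the matching call is normally gzuncompress(), whereas gzinflate() expects raw deflate data:

        import zlib

        payload = b"data captured on the phone"        # stand-in for the NSData bytes
        compressed = zlib.compress(payload)            # zlib-wrapped deflate stream
        assert zlib.decompress(compressed) == payload  # this is what the server has to undo

    If the upload is base64- or URL-encoded in transit, it has to be decoded back to raw bytes before decompression.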

    Read the article

  • How to balance the root domain using NS records?

    - by Patrick McCurley
    I have two load balancers that balance incoming traffic across multiple data centers. These work fine; I can test them by doing an 'nslookup mydomain.com xIP'. I have now taken out DNS services with DYN.com to let me manage the DNS zone file, so that typing mydomain.com will ask my load balancers which IP address to resolve.
    Step 1: the NS record for www. I set up A records (glue) for ns1 and ns2, then the corresponding NS records to delegate the DNS lookup to the balancers instead of DYN.com's nameservers:
    ns1.mydomain.com  A   [ip address of load balancer 1]
    ns2.mydomain.com  A   [ip address of load balancer 1]
    www.mydomain.com  NS  ns1.mydomain.com
    www.mydomain.com  NS  ns2.mydomain.com
    All is well: when I type www.mydomain.com, the requests get delegated to my load balancers, which provide the IP address of the endpoint, and the connection is made successfully.
    Step 2: the NS record for the root. This is where I run into problems. I need customers to be able to type 'mydomain.com' (without the www) and ALSO get delegated to the load balancers for the IP address. However, from the research I have done, and through the DYN control panel, it seems that providing an NS record for the root is not allowed, as this overrides the default NS servers. How can I delegate both the root and the www to my load balancers?
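
    A quick way to see what is currently returned for the apex and for www, assuming the dnspython package (version 2.x for resolve()); the domain is the placeholder from the question:

        import dns.resolver

        # Name servers currently answered for the zone apex (mydomain.com)
        for record in dns.resolver.resolve("mydomain.com", "NS"):
            print(record.target)

        # The A record the balancers are expected to serve for www
        for record in dns.resolver.resolve("www.mydomain.com", "A"):
            print(record.address)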

    Read the article

  • WS 2008 R2 giving "Internal Server Error"

    - by dragon112
    I have had this problem for a while now and can't find the cause at all. When I open a page it will sometimes give a 500 Internal Server Error message. This happens on a website that otherwise works perfectly, but when I try to upload anything it will give this message (all PHP settings have been set to either 1 GB or 3000 seconds, as have the IIS headers). The error also occurs when I open a simple page which does nothing more than include another PHP page and a couple of classes. I have no idea what causes this error and would love to hear from any of you what it could be. I checked the server logs, and for the upload issue I found this error:
    The description for Event ID 1 from source named cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component on the local computer. If the event originated on another computer, the display information had to be saved with the event. The following information was included with the event: managed-keys-zone ./IN: loading from master file managed-keys.bind failed: file not found the message resource is present but the message is not found in the string/message table
    Regards, Dragon

    Read the article

  • Error on windows using session from appengine-utilities

    - by fredrik
    Hi, I ran across an odd problem while trying to transfer a project to a Windows machine. In my project I use a session handler (http://gaeutilities.appspot.com/session). It works fine on my Mac, but on Windows I get:
    Traceback (most recent call last):
      File "C:\Program Files (x86)\Google\google_appengine\google\appengine\ext\webapp\__init__.py", line 510, in __call__
        handler.get(*groups)
      File "C:\Development\Byggmax.Affiliate\bmaffiliate\admin.py", line 29, in get
        session = Session()
      File "C:\Development\Byggmax.Affiliate\bmaffiliate\appengine_utilities\sessions.py", line 547, in __init__
        self.cookie.load(string_cookie)
      File "C:\Python26\lib\Cookie.py", line 628, in load
        for k, v in rawdata.items():
    AttributeError: 'unicode' object has no attribute 'items'
    Is anyone familiar with the session handler who knows anything about this? All help is welcome! ..fredrik
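
    The traceback points at Python 2.6's Cookie module: BaseCookie.load() only parses str input, so a unicode cookie header falls through to the dict branch and hits .items(). A tiny Python 2 sketch of the failure and one workaround (casting to str before the handler loads it); the cookie value is made up:

        import Cookie

        raw = u"session=abc123"              # unicode, e.g. pulled from the environ on Windows
        cookie = Cookie.SimpleCookie()
        # cookie.load(raw) would raise AttributeError: 'unicode' object has no attribute 'items'
        cookie.load(str(raw))                # parsing works once it is a byte string
        print cookie["session"].value        # -> abc123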

    Read the article

  • How to determine if a file will be logically moved or physically moved.

    - by Frederic Morin
    The facts: when a file is moved, there are two possibilities: either the source and destination are on the same partition and only the file system index is updated, or the source and destination are on two different file systems and the file needs to be copied byte by byte (a copy on move).
    The question: how can I determine whether a file will be logically or physically moved? I'm transferring large files (700+ MB) and would adopt a different behavior for each situation.
    Edit: I've already coded a file-moving dialog with a worker thread that performs the blocking I/O calls to copy the file a megabyte at a time. It provides information to the user such as a rough estimate of the remaining time and the transfer rate. The problem is: how do I know whether the file can be moved logically before trying to move it physically?
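
    One common check is to compare the device IDs of the source file and the destination directory before attempting the move; a sketch in Python, assuming both paths exist. On POSIX systems a matching st_dev means a rename stays within one filesystem; on Windows the value is less meaningful, so treat it as a heuristic there:

        import os

        def same_filesystem(src, dst_dir):
            # Same device ID => a rename() is a cheap metadata update (a "logical" move).
            return os.stat(src).st_dev == os.stat(dst_dir).st_dev

        if same_filesystem("/data/big.iso", "/data/archive"):
            print("a plain rename is enough")
        else:
            print("fall back to the chunked copy with progress reporting")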

    Read the article

  • SQL Server Upgrade 'Developer > Enterprise'

    - by JD
    Hey guys, my company purchased Visual Studio Pro 2008 last year, which included a 'free' copy of SQL Server Developer edition, which I have been using for development. We want to upgrade that Developer edition copy to Enterprise (as we now want to use the server as a production server), and we have purchased the licenses for this. Now, morally we're in the clear, but does this comply with Microsoft's licensing terms and conditions? We have Developer installed exactly how we want it and don't really want to uninstall SQL Server Developer just to install SQL Server Enterprise. Is there a way to switch the license key to our Enterprise key without having to reinstall? Thanks, JD

    Read the article

  • Populate Dynamically created ASPX Page

    - by Sandhurst
    Well, the title might be a little confusing. What I am currently doing is creating an ASPX form dynamically and saving its data by using Server.Transfer("ProcessPage.aspx"). On ProcessPage.aspx I use the PreviousPage property to save the data entered by the user in the dynamically created form. Each dynamic form is given an ID, for example 123.aspx. What I now want to achieve is to repopulate the dynamically created ASPX page with the user's input values from the database. Please note that I do not have an .aspx.cs code-behind being generated dynamically - I am only generating the .aspx page. Any suggestions?

    Read the article

  • Error in Java code

    - by user243680
    I am getting the following error when I try to use a Bluetooth dongle to transfer a video file from a PC to a mobile phone. Does anyone know what is going wrong?
    run:
    BlueCove log redirected to log4j
    log4j:WARN No appenders could be found for logger (com.intel.bluetooth).
    log4j:WARN Please initialize the log4j system properly.
    BlueCove version 2.1.0 on bluesoleil
    java.io.IOException: Device not discovered
    BlueCove stack shutdown completed
        at com.intel.bluetooth.BluetoothStackBlueSoleil.connectionRfOpenClientConnection(BluetoothStackBlueSoleil.java:361)
        at com.intel.bluetooth.BluetoothRFCommClientConnection.<init>(BluetoothRFCommClientConnection.java:37)
        at com.intel.bluetooth.MicroeditionConnector.openImpl(MicroeditionConnector.java:379)
        at com.intel.bluetooth.MicroeditionConnector.open(MicroeditionConnector.java:162)
        at javax.microedition.io.Connector.open(Connector.java:83)
        at de.avetana.obexsolo.OBEXConnector.open(OBEXConnector.java:103)
        at OBEXTest.main(OBEXTest.java:23)

    Read the article

  • How do I set up DNS with nic.io to point to an AWS EC2 server?

    - by Chad Johnson
    I purchased a domain one week ago via nic.io. I have elected to provide my own DNS [because they provided no other option]. I'm trying to point my .io domain at my EC2 server instance. I've allocated an elastic IP and associated it with the instance; I can SSH into the instance and access port 80 via the IP address just fine. The IP is 54.235.201.241.
    nic.io support said the following: "You have selected to provide your own DNS and therefore if there is an issue with the set-up of the name servers you will need to contact your DNS provider."
    So I created a hosted zone via Route 53 in AWS, which created NS and SOA records. I then set the primary and secondary servers on nic.io's domain admin page to the SOA record domains, and additionally set the optional servers to the NS domains. I did this two days ago, and I still can't access the server via the domain. I ran a DNS check here and am still not sure what I need to do: http://mydnscheck.com/?domain=chadjohnson.io&ns1=&ns2=&ns3=&ns4=&ns5=&ns6=.
    I have no idea what I'm supposed to do. Does anyone have any ideas?
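
    For reference, the delegation at nic.io normally has to list the name servers from the hosted zone's NS record set (not the SOA), and the zone itself needs an A record at the apex pointing to the elastic IP. A minimal sketch of creating that A record, assuming boto3 and that the hosted zone already exists; the zone ID is a placeholder:

        import boto3

        route53 = boto3.client("route53")
        route53.change_resource_record_sets(
            HostedZoneId="Z1EXAMPLE",                     # hypothetical hosted zone ID
            ChangeBatch={
                "Changes": [{
                    "Action": "UPSERT",
                    "ResourceRecordSet": {
                        "Name": "chadjohnson.io.",
                        "Type": "A",
                        "TTL": 300,
                        "ResourceRecords": [{"Value": "54.235.201.241"}],
                    },
                }]
            },
        )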

    Read the article

  • Embed a Python persistence layer into a C++ application - good idea?

    - by Rickard
    Say I'm about to write an application with a thin GUI layer, a really fat calculation layer (doing computationally heavy calibrations and other long-running work) and a fairly simple persistence layer. I'm looking at building the GUI and calculation layers in C++ (using Qt for the GUI parts). Now, would it be a crazy idea to build the persistence layer in Python, using SQLAlchemy, and embed it into the C++ application, letting the layers interface with each other through lightweight data transfer objects (written in C++ but accessible from Python)? (The other alternative I'm leaning towards would probably be to write the app in Python from the start, using the PyQt wrapper, and then call into C++ for the computational tasks.) Thanks, Rickard
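
    For scale, a minimal sketch of what the Python side of such a persistence layer could look like, assuming a recent SQLAlchemy and a SQLite file; the table and fields are placeholders for whatever the C++ DTOs would carry:

        from sqlalchemy import Column, Float, Integer, String, create_engine
        from sqlalchemy.orm import declarative_base, sessionmaker

        Base = declarative_base()

        class CalibrationResult(Base):
            __tablename__ = "calibration_results"
            id = Column(Integer, primary_key=True)
            name = Column(String)
            value = Column(Float)

        engine = create_engine("sqlite:///app.db")
        Base.metadata.create_all(engine)
        Session = sessionmaker(bind=engine)

        def save_result(name, value):
            # Called from the C++ side with plain values unpacked from a DTO.
            session = Session()
            session.add(CalibrationResult(name=name, value=value))
            session.commit()
            session.close()

    The embedding itself (initialising CPython and marshalling the DTOs across the boundary) is where most of the extra complexity lives, which is the usual argument for the all-Python-plus-PyQt alternative.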

    Read the article

  • How can I verify that javascript and images are being cached?

    - by BestPractices
    I want to verify that the images, CSS, and JavaScript files that are part of my page are being cached by my browser. I've used Fiddler and Google Page Speed, and it's unclear whether either is giving me the information I need. Fiddler shows the HTTP 304 response for images, CSS, and JavaScript, which should tell the browser to use the cached copy. Google Page Speed shows the 304 response but doesn't show a transfer size of zero; instead it shows the full file size of the resource. Note also that I have seen Google Page Speed report a 200 response but then put the word (cache) next to the 200 (so the status is 200 (cache)), which doesn't make a lot of sense. Any other suggestions as to how I can verify whether the server is sending back images, CSS, and JavaScript after they've been retrieved and cached by a previous page hit?
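
    One more direct check is to issue the conditional request yourself and confirm that the 304 comes back with an empty body; a small sketch assuming the requests package, with a placeholder URL:

        import requests

        url = "http://example.com/static/app.js"
        first = requests.get(url)

        headers = {}
        if "ETag" in first.headers:
            headers["If-None-Match"] = first.headers["ETag"]
        if "Last-Modified" in first.headers:
            headers["If-Modified-Since"] = first.headers["Last-Modified"]

        second = requests.get(url, headers=headers)
        print(second.status_code, len(second.content))   # 304 and 0 mean the cached copy would be reused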

    Read the article

  • High-load MySQL on Debian server stops every day. Why?

    - by Oleg Abrazhaev
    I have a Debian server with 32 GB of memory, running apache2, memcached and nginx. Memory usage is always at the maximum, with only about 500 MB free. Most of the memory is taken by MySQL; Apache is configured for only 70 clients, and the other services use little memory. When MySQL uses up all the memory it stops, nothing works, and MySQL needs a restart. MySQL is configured to use at most 24 GB of memory. I have heavyweight InnoDB databases (400000 rows, 30 GB), and a multithreaded daemon on the server makes many inserts into these tables - that's why InnoDB. Here is my MySQL config:
    [mysqld]
    #
    # * Basic Settings
    #
    default-time-zone = "+04:00"
    user = mysql
    pid-file = /var/run/mysqld/mysqld.pid
    socket = /var/run/mysqld/mysqld.sock
    port = 3306
    basedir = /usr
    datadir = /var/lib/mysql
    tmpdir = /tmp
    language = /usr/share/mysql/english
    skip-external-locking
    default-time-zone='Europe/Moscow'
    #
    # Instead of skip-networking the default is now to listen only on
    # localhost which is more compatible and is not less secure.
    #
    # * Fine Tuning
    #
    #low_priority_updates = 1
    concurrent_insert = ALWAYS
    wait_timeout = 600
    interactive_timeout = 600
    #normal
    key_buffer_size = 2024M
    #key_buffer_size = 1512M
    #70% hot cache
    key_cache_division_limit= 70
    #16-32
    max_allowed_packet = 32M
    #1-16M
    thread_stack = 8M
    #40-50
    thread_cache_size = 50
    #orderby groupby sort
    sort_buffer_size = 64M
    #same
    myisam_sort_buffer_size = 400M
    #temp table creates when group_by
    tmp_table_size = 3000M
    #tables in memory
    max_heap_table_size = 3000M
    #on disk
    open_files_limit = 10000
    table_cache = 10000
    join_buffer_size = 5M
    # This replaces the startup script and checks MyISAM tables if needed
    # the first time they are touched
    myisam-recover = BACKUP
    #myisam_use_mmap = 1
    max_connections = 200
    thread_concurrency = 8
    #
    # * Query Cache Configuration
    #
    #more ignored
    query_cache_limit = 50M
    query_cache_size = 210M
    #on query cache
    query_cache_type = 1
    #
    # * Logging and Replication
    #
    # Both location gets rotated by the cronjob.
    # Be aware that this log type is a performance killer.
    #log = /var/log/mysql/mysql.log
    #
    # Error logging goes to syslog. This is a Debian improvement :)
    #
    # Here you can see queries with especially long duration
    log_slow_queries = /var/log/mysql/mysql-slow.log
    long_query_time = 1
    log-queries-not-using-indexes
    #
    # The following can be used as easy to replay backup logs or for replication.
    # note: if you are setting up a replication slave, see README.Debian about
    # other settings you may need to change.
    #server-id = 1
    #log_bin = /var/log/mysql/mysql-bin.log
    server-id = 1
    log-bin = /var/lib/mysql/mysql-bin
    #replicate-do-db = gate
    log-bin-index = /var/lib/mysql/mysql-bin.index
    log-error = /var/lib/mysql/mysql-bin.err
    relay-log = /var/lib/mysql/relay-bin
    relay-log-info-file = /var/lib/mysql/relay-bin.info
    relay-log-index = /var/lib/mysql/relay-bin.index
    binlog_do_db = 24avia
    expire_logs_days = 10
    max_binlog_size = 100M
    read_buffer_size = 4024288
    innodb_buffer_pool_size = 5000M
    innodb_flush_log_at_trx_commit = 2
    innodb_thread_concurrency = 8
    table_definition_cache = 2000
    group_concat_max_len = 16M
    #binlog_do_db = gate
    #binlog_ignore_db = include_database_name
    #
    # * BerkeleyDB
    #
    # Using BerkeleyDB is now discouraged as its support will cease in 5.1.12.
    #skip-bdb
    #
    # * InnoDB
    #
    # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/.
    # Read the manual for more InnoDB related options. There are many!
    # You might want to disable InnoDB to shrink the mysqld process by circa 100MB.
    #skip-innodb
    #
    # * Security Features
    #
    # Read the manual, too, if you want chroot!
    # chroot = /var/lib/mysql/
    #
    # For generating SSL certificates I recommend the OpenSSL GUI "tinyca".
    #
    # ssl-ca=/etc/mysql/cacert.pem
    # ssl-cert=/etc/mysql/server-cert.pem
    # ssl-key=/etc/mysql/server-key.pem
    [mysqldump]
    quick
    quote-names
    max_allowed_packet = 500M
    [mysql]
    #no-auto-rehash # faster start of mysql but no tab completition
    [isamchk]
    key_buffer = 32M
    key_buffer_size = 512M
    #
    # * NDB Cluster
    #
    # See /usr/share/doc/mysql-server-*/README.Debian for more information.
    #
    # The following configuration is read by the NDB Data Nodes (ndbd processes)
    # not from the NDB Management Nodes (ndb_mgmd processes).
    #
    # [MYSQL_CLUSTER]
    # ndb-connectstring=127.0.0.1
    #
    # * IMPORTANT: Additional settings that can override those from this file!
    # The files must end with '.cnf', otherwise they'll be ignored.
    #
    !includedir /etc/mysql/conf.d/
    Please help me make it stable. Memory used:
    /etc/mysql # free
    total used free shared buffers cached
    Mem: 32930800 32766424 164376 0 139208 23829196
    -/+ buffers/cache: 8798020 24132780
    Swap: 33553328 44660 33508668
    Maybe my problem is not memory, but MySQL still stops every day. As you can see, about 24 GB of memory is free as cache. (Thanks to Michael Hampton for the correction.) The load average on the server is 3.5. Maybe it's the HDD or another problem? Maybe my config is not optimal for 30 GB of InnoDB? I have already tried mysqltuner and tuning-primer.sh, but they mark everything green.
    Mysqltuner output:
    mysqltuner
    >> MySQLTuner 1.0.1 - Major Hayden <[email protected]>
    >> Bug reports, feature requests, and downloads at http://mysqltuner.com/
    >> Run with '--help' for additional options and output filtering
    -------- General Statistics --------------------------------------------------
    [--] Skipped version check for MySQLTuner script
    [OK] Currently running supported MySQL version 5.5.24-9-log
    [OK] Operating on 64-bit architecture
    -------- Storage Engine Statistics -------------------------------------------
    [--] Status: -Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
    [--] Data in MyISAM tables: 112G (Tables: 1528)
    [--] Data in InnoDB tables: 39G (Tables: 340)
    [--] Data in PERFORMANCE_SCHEMA tables: 0B (Tables: 17)
    [!!] Total fragmented tables: 344
    -------- Performance Metrics -------------------------------------------------
    [--] Up for: 8h 18m 33s (14M q [478.333 qps], 259K conn, TX: 9B, RX: 5B)
    [--] Reads / Writes: 84% / 16%
    [--] Total buffers: 10.5G global + 81.1M per thread (200 max threads)
    [OK] Maximum possible memory usage: 26.3G (83% of installed RAM)
    [OK] Slow queries: 1% (259K/14M)
    [!!] Highest connection usage: 100% (201/200)
    [OK] Key buffer size / total MyISAM indexes: 1.5G/5.6G
    [OK] Key buffer hit rate: 100.0% (6B cached / 1M reads)
    [OK] Query cache efficiency: 74.3% (8M cached / 11M selects)
    [OK] Query cache prunes per day: 0
    [OK] Sorts requiring temporary tables: 0% (0 temp sorts / 247K sorts)
    [!!] Joins performed without indexes: 106025
    [!!] Temporary tables created on disk: 49% (351K on disk / 715K total)
    [OK] Thread cache hit rate: 99% (249 created / 259K connections)
    [!!] Table cache hit rate: 15% (2K open / 13K opened)
    [OK] Open file limit used: 15% (3K/20K)
    [OK] Table locks acquired immediately: 99% (4M immediate / 4M locks)
    [!!] InnoDB data size / buffer pool: 39.4G/5.9G
    -------- Recommendations -----------------------------------------------------
    General recommendations:
    Run OPTIMIZE TABLE to defragment tables for better performance
    MySQL started within last 24 hours - recommendations may be inaccurate
    Reduce or eliminate persistent connections to reduce connection usage
    Adjust your join queries to always utilize indexes
    Temporary table size is already large - reduce result set size
    Reduce your SELECT DISTINCT queries without LIMIT clauses
    Increase table_cache gradually to avoid file descriptor limits
    Variables to adjust:
    max_connections (> 200)
    wait_timeout (< 600)
    interactive_timeout (< 600)
    join_buffer_size (> 5.0M, or always use indexes with joins)
    table_cache (> 10000)
    innodb_buffer_pool_size (>= 39G)
    Mysql primer output:
    -- MYSQL PERFORMANCE TUNING PRIMER --
    - By: Matthew Montgomery -
    MySQL Version 5.5.24-9-log x86_64
    Uptime = 0 days 8 hrs 20 min 50 sec
    Avg. qps = 478
    Total Questions = 14369568
    Threads Connected = 16
    Warning: Server has not been running for at least 48hrs.
    It may not be safe to use these recommendations
    To find out more information on how each of these
    runtime variables effects performance visit:
    http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html
    Visit http://www.mysql.com/products/enterprise/advisors.html
    for info about MySQL's Enterprise Monitoring and Advisory Service
    SLOW QUERIES
    The slow query log is enabled.
    Current long_query_time = 1.000000 sec.
    You have 260626 out of 14369701 that take longer than 1.000000 sec. to complete
    Your long_query_time seems to be fine
    BINARY UPDATE LOG
    The binary update log is enabled
    Binlog sync is not enabled, you could loose binlog records during a server crash
    WORKER THREADS
    Current thread_cache_size = 50
    Current threads_cached = 45
    Current threads_per_sec = 0
    Historic threads_per_sec = 0
    Your thread_cache_size is fine
    MAX CONNECTIONS
    Current max_connections = 200
    Current threads_connected = 11
    Historic max_used_connections = 201
    The number of used connections is 100% of the configured maximum.
    You should raise max_connections
    INNODB STATUS
    Current InnoDB index space = 214 M
    Current InnoDB data space = 39.40 G
    Current InnoDB buffer pool free = 0 %
    Current innodb_buffer_pool_size = 5.85 G
    Depending on how much space your innodb indexes take up it may be safe
    to increase this value to up to 2 / 3 of total system memory
    MEMORY USAGE
    Max Memory Ever Allocated : 23.46 G
    Configured Max Per-thread Buffers : 15.84 G
    Configured Max Global Buffers : 7.54 G
    Configured Max Memory Limit : 23.39 G
    Physical Memory : 31.40 G
    Max memory limit seem to be within acceptable norms
    KEY BUFFER
    Current MyISAM index space = 5.61 G
    Current key_buffer_size = 1.47 G
    Key cache miss rate is 1 : 5578
    Key buffer free ratio = 77 %
    Your key_buffer_size seems to be fine
    QUERY CACHE
    Query cache is enabled
    Current query_cache_size = 200 M
    Current query_cache_used = 101 M
    Current query_cache_limit = 50 M
    Current Query cache Memory fill ratio = 50.59 %
    Current query_cache_min_res_unit = 4 K
    MySQL won't cache query results that are larger than query_cache_limit in size
    SORT OPERATIONS
    Current sort_buffer_size = 64 M
    Current read_rnd_buffer_size = 256 K
    Sort buffer seems to be fine
    JOINS
    Current join_buffer_size = 5.00 M
    You have had 106606 queries where a join could not use an index properly
    You have had 8 joins without keys that check for key usage after each row
    join_buffer_size >= 4 M
    This is not advised
    You should enable "log-queries-not-using-indexes"
    Then look for non indexed joins in the slow query log.
    OPEN FILES LIMIT
    Current open_files_limit = 20210 files
    The open_files_limit should typically be set to at least 2x-3x
    that of table_cache if you have heavy MyISAM usage.
    Your open_files_limit value seems to be fine
    TABLE CACHE
    Current table_open_cache = 10000 tables
    Current table_definition_cache = 2000 tables
    You have a total of 1910 tables
    You have 2151 open tables.
    The table_cache value seems to be fine
    TEMP TABLES
    Current max_heap_table_size = 2.92 G
    Current tmp_table_size = 2.92 G
    Of 366426 temp tables, 49% were created on disk
    Perhaps you should increase your tmp_table_size and/or max_heap_table_size
    to reduce the number of disk-based temporary tables
    Note! BLOB and TEXT columns are not allow in memory tables.
    If you are using these columns raising these values might not impact your
    ratio of on disk temp tables.
    TABLE SCANS
    Current read_buffer_size = 3 M
    Current table scan ratio = 2846 : 1
    read_buffer_size seems to be fine
    TABLE LOCKING
    Current Lock Wait ratio = 1 : 185
    You may benefit from selective use of InnoDB.
    If you have long running SELECT's against MyISAM tables and perform
    frequent updates consider setting 'low_priority_updates=1'

    Read the article

  • Upload 1GB files using chunking in PHP

    - by rjha94
    I have a web application that accepts file uploads of up to 4 MB. The server-side script is PHP and the web server is NGINX. Many users have requested that I increase this limit drastically to allow uploads of video etc. However, there seems to be no easy solution for this problem with PHP. First, on the client side I am looking for something that would allow me to chunk files during transfer. SWFUpload does not seem to do that. I guess I could stream uploads using JavaFX (http://blogs.sun.com/rakeshmenonp/entry/javafx_upload_file), but I cannot find any equivalent of request.getInputStream in PHP. Increasing the browser client_post limits or the php.ini upload or max_execution time settings is not really a solution for really large files (~1 GB), because the browser may time out - and think of all those blobs stored in memory. Is there any way to solve this problem using PHP on the server side? I would appreciate your replies.
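
    A rough sketch of the client-side chunking idea, assuming the Python requests package and a hypothetical upload endpoint that reassembles chunks by offset; this only illustrates the protocol shape and is not SWFUpload- or NGINX-specific:

        import os
        import requests

        CHUNK = 5 * 1024 * 1024          # 5 MB per request keeps each POST well under the server limits

        def upload(path, url):
            size = os.path.getsize(path)
            offset = 0
            with open(path, "rb") as f:
                while True:
                    chunk = f.read(CHUNK)
                    if not chunk:
                        break
                    requests.post(
                        url,
                        data=chunk,
                        headers={"Content-Range": "bytes %d-%d/%d" % (offset, offset + len(chunk) - 1, size)},
                    )
                    offset += len(chunk)

        upload("video.mp4", "http://example.com/upload.php")

    With chunking, each request stays small, so limits such as post_max_size and max_execution_time apply per chunk rather than to the whole 1 GB file.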

    Read the article

  • XSLT Escape Character not working

    - by liveek
    I am trying to use escape characters in my text output, as I would like to surround the output in emailData tags. I am using <xsl:text>&#60;emailData&#62;</xsl:text> in the XSLT to ensure that this works. However, because I am using a tool called Cast Iron, for some reason it is not converting the &#60; into < and just spits out &lt;emailData>. You can see an image of it HERE that illustrates the output I am getting. My source code is below. How else could I wrap this in emailData tags?
    <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <xsl:output method="text"/>
      <xsl:template match="header">
        <xsl:text>&#60;emailData&#62;</xsl:text>
        <xsl:text>&#10;</xsl:text>
        <xsl:text>From: </xsl:text>
        <xsl:value-of select="from/text()"/>
        <xsl:text>&#10;</xsl:text>
        <xsl:text>To: </xsl:text>
        <xsl:value-of select="to/text()"/>
        <xsl:text>&#10;</xsl:text>
        <xsl:text>Subject: </xsl:text>
        <xsl:value-of select="subject/text()"/>
        <xsl:text>&#10;</xsl:text>
        <xsl:text>Content-Type: </xsl:text>
        <xsl:value-of select="contentType/text()"/>
        <xsl:text>&#10;</xsl:text>
        <xsl:text> boundary="</xsl:text>
        <xsl:value-of select="boundary/text()"/>
        <xsl:text>"</xsl:text>
        <xsl:text>&#10;</xsl:text>
        <xsl:text>MIME-Version: </xsl:text>
        <xsl:value-of select="mimeVersion/text()"/>
      </xsl:template>
      <xsl:template match="email">
        <xsl:text>&#10;&#10;</xsl:text>
        <xsl:text>--</xsl:text>
        <xsl:value-of select="../header/boundary/text()"/>
        <xsl:text>&#10;</xsl:text>
        <xsl:text>Content-Type: </xsl:text>
        <xsl:value-of select="contentTypeBody/text()"/>
        <xsl:text> charset="us-ascii"</xsl:text>
        <xsl:text>&#10;</xsl:text>
        <xsl:text>Content-Transfer-Encoding: </xsl:text>
        <xsl:value-of select="contentTransfer/text()"/>
        <xsl:text>&#10;&#10;</xsl:text>
        <xsl:value-of select="body/text()"/>
      </xsl:template>
      <xsl:template match="Attachment">
        <xsl:for-each select="Attachments">
          <xsl:text>&#0010;&#0010;</xsl:text>
          <xsl:value-of select="../../header/boundary/text()"/>
          <xsl:text>&#10;</xsl:text>
          <xsl:text>Content-Type: </xsl:text>
          <xsl:value-of select="attachmentContentType/text()"/>
          <xsl:text> name="</xsl:text>
          <xsl:value-of select="attachmentDescription/text()"/>
          <xsl:text>"</xsl:text>
          <xsl:text>&#10;</xsl:text>
          <xsl:text>Content-Description: </xsl:text>
          <xsl:value-of select="attachmentDescription/text()"/>
          <xsl:text>&#10;</xsl:text>
          <xsl:text>Content-Disposition: attachment; filename="</xsl:text>
          <xsl:value-of select="atachementDisposition/text()"/>
          <xsl:text>"</xsl:text>
          <xsl:text>&#10;</xsl:text>
          <xsl:text>Content-Transfer-Encoding: </xsl:text>
          <xsl:value-of select="attachmentContentTransfer/text()"/>
          <xsl:text>&#10;&#10;</xsl:text>
          <xsl:value-of select="attachementBody/text()"/>
          <xsl:text>&#10;</xsl:text>
          <xsl:text>&#60;/emailData&#62;</xsl:text>
        </xsl:for-each>
      </xsl:template>
      <xsl:template match="text()"/>
    </xsl:stylesheet>

    Read the article

  • Large number array compression

    - by gatapia
    Hi all, I've got a JavaScript application that sends a large amount of numerical data down the wire. This data is then stored in a database. I am having size issues (too much bandwidth, the database getting too big). I am now ready to sacrifice some performance for compression. I was thinking of implementing base-62 encoding with number.toString(62) and parseInt(compressed, 62). This would certainly reduce the size of the data, but before I go ahead and do it I thought I would put it to the folks here, as I know there must be some outside-the-box solution I have not considered. The basic specs are:
    - Compress large number arrays into strings for JSONP transfer (so I think UTF is out).
    - Be relatively fast - look, I'm not expecting the same performance as I have now, but I also don't want gzip compression either.
    Any ideas would be greatly appreciated. Thanks, Guido Tapia
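
    The same idea sketched in Python for clarity (a JavaScript version would mirror it): map non-negative integers to and from a base-62 alphabet so each number costs fewer characters in the JSON payload. The alphabet ordering here is an arbitrary choice and just has to match on both ends:

        ALPHABET = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"

        def encode62(n):
            # Repeatedly take the remainder mod 62; digits come out least-significant first.
            if n == 0:
                return ALPHABET[0]
            digits = []
            while n:
                n, rem = divmod(n, 62)
                digits.append(ALPHABET[rem])
            return "".join(reversed(digits))

        def decode62(s):
            n = 0
            for ch in s:
                n = n * 62 + ALPHABET.index(ch)
            return n

        assert decode62(encode62(1234567890)) == 1234567890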

    Read the article

  • Keep Window Inactive In Appearance Even When Activated

    - by Zach Johnson
    Is there a way to keep a window looking inactive, even if it has focus? I have two forms (A and B). After the user interacts with A, I transfer focus back to B. The result of the focus transfers (the user clicking on A, then focus being transferred back to B) is that form A blinks from active to inactive. This looks ugly (especially on Vista, where A momentarily gets a bigger shadow). How can I make A stay inactive looking so this blinking does not happen?

    Read the article

  • VMware vCenter 5.1 installation with FQDN error

    - by CSG
    I'm trying to install vCenter 5.1 on a dedicated Windows 2012 server (with standalone SQL Express). During the installation of the Single Sign On module I get the warning "the fully qualified domain name cannot be resolved with nslookup. if you continue the installation some features might not work correctly. for detailed requiments see the installation and setup guide". The only indication I have found concerns reverse-zone DNS resolution, and that works. I have verified that DNS works properly with nslookup:
    C:\Users\admin>nslookup srv6.mydomain.local
    Server: srv2.mydomain.local
    Address: 172.25.4.22
    Nome: srv6.mydomain.local
    Address: 172.25.1.26
    C:\Users\admin>nslookup 172.25.1.26
    Server: srv2.mydomain.local
    Address: 172.25.4.22
    Nome: srv6.mydomain.local
    Address: 172.25.1.26
    (All IPs are correct: the vCenter server is srv6 and the DC+DNS server is srv2, on different VLANs.) I've tried to force the resolution of the IP by changing the [..]\drivers\etc\hosts file, I've disabled IPv6 support, I've used all combinations of domain prefixes (explicit, by DHCP, undefined), and I've disabled all antivirus/firewall software (Kaspersky Endpoint 10). Is this a bug in vCenter 5.1.0-1065152? Do you have any suggestions for me?

    Read the article

  • DNS server not working?

    - by Behrooz A
    I just set up a DNS server on my Windows 7 machine, called SimpleDNS. I added a zone, for example sag.com, and pointed www.sag.com and sag.com to 192.168.1.2 (my network IP address). The problem is that when I try to ping sag.com, the SimpleDNS log says it answered the request with 192.168.1.2, but the ping doesn't resolve anything. SimpleDNS logs:
    14:00:43 Request from 192.168.1.2 for A-record for www.sag.com
    14:00:43 Sending reply to 192.168.1.2 about A-record for www.sag.com:
    14:00:43 -> Answer: A-record for www.sag.com = 192.168.1.2
    14:00:43 -> Authority: NS-record for www.sag.com = mehr-pc
    nslookup:
    C:\Users\Mehr\Desktop>nslookup www.sag.com
    DNS request timed out.
    timeout was 2 seconds.
    Server: UnKnown
    Address: 192.168.1.1
    DNS request timed out.
    timeout was 2 seconds.
    DNS request timed out.
    timeout was 2 seconds.
    DNS request timed out.
    timeout was 2 seconds.
    DNS request timed out.
    timeout was 2 seconds.
    *** Request to UnKnown timed-out
    The DNS server IP is 192.168.1.2, and the access point address is 192.168.1.1. What should I do?

    Read the article

  • What is a good pattern for binding a collection of objects coming from WCF, in Silverlight?

    - by Krishna
    Hi there, I've got a question about a Silverlight WCF data-binding pattern. There are many examples of how to bind data using {Binding} expressions in XAML, how to make async calls to a WCF service, how to set the DataContext property of a UI element, how to use ObservableCollection and INotifyPropertyChanged, INotifyCollectionChanged, and so on.
    Background: I'm using the MVVM pattern and have a Silverlight ItemsControl whose ItemsSource is set to an ObservableCollection property on my ViewModel object. My view is of course the XAML which has the {Binding}. Say the model object is called 'Metric'. My ViewModel periodically makes calls to a WCF service that returns an ObservableCollection; MetricInfo is the data transfer object (DTO).
    My question is two-fold:
    Is there any way to avoid copying each property of MetricInfo to the model class, Metric?
    When the WCF call completes, is there any way to make sure I sync the items which are in both my local ObservableCollection and the result of the WCF call, without having to first clear out all the items in the local collection and then add all the ones from the WCF call result?
    Thanks, Krishna

    Read the article

  • Which internet scenario would be better?

    - by JL
    I currently have an 8 Mbps (down) / 512 kbps (up) telephone ADSL connection. I must say the reliability is excellent, and up until now it has been the fastest connection I could get, because I don't live in a cable zone. The real speed of my connection is around 7 Mbps, but sometimes I manage to get the full 8 Mbps. I use my connection for work, so it needs to be at least 99% reliable. Recently I was told by a guy who lives up the road that he has a wireless connection with an external antenna, his speeds are 20 Mbps / 512 kbps, and he's paying about half of what I pay for my wired telephone connection. My question is: is wireless internet good enough for a power user who uses his connection for work 8 hours a day, including VPNing into servers remotely? Besides this I also enjoy playing the odd network game - not a WoW freak, but sometimes I do pick up the odd MMORPG and at times indulge in some semi-heavy gaming sprees. Will the wireless latency drive me crazy and seem slow in comparison? Will it be reliable enough? I also live in an area that snows heavily in winter. I guess it's a question of: should I go wireless or not? I've only had one wireless connection before, and that was years ago using iBurst technology; I remember it was terrible for VPN, but I guess the technology might have improved since then? What do you guys think?

    Read the article

  • BlueCove failing to associate with Bluetooth device in Java.

    - by user243680
    I am getting the following error when I try to use a Bluetooth dongle to transfer a video file from a PC to a mobile phone. Does anyone know what is going wrong?
    run:
    BlueCove log redirected to log4j
    log4j:WARN No appenders could be found for logger (com.intel.bluetooth).
    log4j:WARN Please initialize the log4j system properly.
    BlueCove version 2.1.0 on bluesoleil
    java.io.IOException: Device not discovered
    BlueCove stack shutdown completed
        at com.intel.bluetooth.BluetoothStackBlueSoleil.connectionRfOpenClientConnection(BluetoothStackBlueSoleil.java:361)
        at com.intel.bluetooth.BluetoothRFCommClientConnection.<init>(BluetoothRFCommClientConnection.java:37)
        at com.intel.bluetooth.MicroeditionConnector.openImpl(MicroeditionConnector.java:379)
        at com.intel.bluetooth.MicroeditionConnector.open(MicroeditionConnector.java:162)
        at javax.microedition.io.Connector.open(Connector.java:83)
        at de.avetana.obexsolo.OBEXConnector.open(OBEXConnector.java:103)
        at OBEXTest.main(OBEXTest.java:23)

    Read the article
