Search Results

Search found 34305 results on 1373 pages for 'self referencing table'.

Page 514 of 1373

  • How to secure both root domain and wildcard subdomains with one SSL cert?

    - by Question Overflow
    I am trying to generate a self-signed SSL certificate to secure both example.com and *.example.com. Looking at the answers to this question and this one, there seem to be about equal numbers of people agreeing and disagreeing on whether it can be done. However, a certificate authority's website suggests that it can. Currently, these are the changes added to my openssl configuration file:

      [req]
      req_extensions = v3_req
      [req_distinguished_name]
      commonName = example.com
      [v3_req]
      subjectAltName = @alt_names
      [alt_names]
      DNS.1 = example.com
      DNS.2 = *.example.com

    I tried the above configuration and generated a certificate. When navigating to https://example.com, it produces the usual warning that the cert is "self-signed". After accepting it, I navigate to https://abc.example.com and an additional warning is produced, saying that the certificate is only valid for example.com. The certificate details list only example.com in the certificate hierarchy, with no sign of any wildcard subdomain. I am not sure whether this is due to a misconfiguration, whether the common name should contain a wildcard, or whether this simply cannot be done.
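    As a minimal sketch of an alternative route: OpenSSL 1.1.1 and later can attach the SANs directly on the command line via -addext, avoiding the config file entirely (file names below are placeholders), and the second command verifies both names actually landed in the cert:

      # Generate a self-signed cert covering both names in one step (OpenSSL >= 1.1.1)
      openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
        -keyout example.key -out example.crt \
        -subj "/CN=example.com" \
        -addext "subjectAltName=DNS:example.com,DNS:*.example.com"

      # Confirm both SAN entries made it into the certificate
      openssl x509 -in example.crt -noout -text | grep -A1 "Subject Alternative Name"

    If the SAN line is missing from that output, the v3_req section was never applied; with the config-file route, the signing step needs -extensions v3_req as well, since extensions from the request are not copied automatically.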

    Read the article

  • What is the "in-the-wire" size of an Ethernet frame? 1518 or 1542?

    - by chrisapotek
    According to the table here, it says that MTU = 1500 bytes and that the payload part is 1500 - 42 bytes, or 1458 bytes (<- this is actually wrong!). Now on top of that you have to add the IPv4 and UDP headers, which are 28 bytes (20 IP + 8 UDP). That leaves my maximum possible application message at 1430 bytes! But looking for this number on the Internet I see 1472 instead. Am I doing the calculation wrong here? All I want to find out is the maximum application message I can send over the wire without the risk of fragmentation. It is definitely not 1500, because that includes the frame headers. Can someone help? The confusion is that the PAYLOAD can actually be as large as 1500 bytes, and that's the MTU. So now, what is the in-the-wire size for a payload of 1500? From that table it can be as big as 1542 bytes. So the maximum app message I can send is 1472 (1500 - 20 (IP) - 8 (UDP)), for a maximum in-the-wire size of 1542. It amazes me how things can get so complicated when they are actually simple. And I have no clue how someone came up with the number 1518 if the table says 1542.
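    The two figures count different things; a short breakdown (standard Ethernet II sizes, with the optional 802.1Q VLAN tag accounting for the larger number) reconciles them:

      #   6 (dst MAC) + 6 (src MAC) + 2 (EtherType)      = 14-byte MAC header
      # + 1500 (payload, i.e. the MTU) + 4 (FCS)         = 1518-byte frame   <- the 1518
      # +   4 (optional 802.1Q VLAN tag)                 = 1522-byte tagged frame
      # +   7 (preamble) + 1 (SFD) + 12 (interframe gap) = 1542 bytes on the wire  <- the 1542
      #
      # Max UDP application payload without fragmentation:
      #   1500 (MTU) - 20 (IPv4 header) - 8 (UDP header) = 1472 bytes

    So 1518 is the frame as the MAC layer sees it, while 1542 adds the physical-layer overhead plus a VLAN tag; neither changes the 1472-byte application limit.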

    Read the article

  • What's throttling the database?

    - by Troels Arvin
    Hardware: Intel x86_64 with 192GB of RAM. OS: CentOS 5.4 x86_64. DBMS: DB2 v. 9.7.1 64 bit. During certain special workloads (e.g. parallel REORGs/RUNSTATs), I've seen the server transporting 450 MB/s at 25000 IO/s (yes, there is probably some storage system caching happening here) while all CPU cores were happily working in an even mix of usermode/wait. And disk benchmark tools can also bring some very satisfying bandwidth and IO/s numbers to the table. On the other hand, we also have another scenario: a single rather complex query with at least one large table scan. db2's "list applications" reports that the query is Executing (not locked). IO: at most 10 MB/s, 500 IO/s; CPU: two cores in 99.9% wait state, all other cores 100% idle. The tables which the query reads from have been altered to have LOCKSIZE=TABLE, so I would think that lock list work is zero. What's going on in such a situation? What tools/snapshots/... can I use to gain better insight into such a case?
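    A few hedged starting points for drilling into a single executing statement on DB2 9.7 (exact option availability varies by fixpack; the agent id comes from "list applications", and MYDB is a placeholder):

      # Per-application snapshot: buffer pool reads, sorts, and wait breakdown
      db2 get snapshot for application agentid <agent-id>

      # db2pd reads shared memory directly, so it stays responsive under load
      db2pd -db MYDB -tcbstats    # per-table access counters
      db2pd -edus                 # engine dispatchable units and their CPU times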

    Read the article

  • My ubuntu server is losing connectivity, require assistance troubleshooting

    - by Coderama
    I have two server monitoring tools that check the uptime of my Ubuntu 10.04.1 LTS server. Over the last month both of these tools have reported the following outages:

    Jan - 15 mins: self recovery
    Jan - 5 mins: self recovery
    Feb - 15 mins: I happened to be online when this occurred and logged into my control panel to find that my server was shut down. I manually restarted it, only because I was online. I wonder if it would have restarted otherwise?

    I have contacted my VPS provider (Webbynode), who could not see any problems with their VPS infrastructure for the February outage. I have also checked the following log files for anomalies that could explain the outages; I am unable to see anything related to the server shutdown (Feb) in these logs:

    /var/log/auth.log
    /var/log/apache2/error.log
    /var/log/syslog
    /var/log/kern.log
    /var/log/boot
    /var/log/messages

    Could anyone suggest any other things I should look out for, such as additional system logs? I just added boot logging through /etc/default/bootlogd.
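    Two more places worth checking, as a hedged sketch: the login accounting records keep every reboot/shutdown event independently of syslog, and kernel out-of-memory kills are a common cause of "self-recovering" outages on small VPSes:

      # wtmp records survive reboots; -x includes shutdown and runlevel changes
      last -x shutdown reboot | head -20

      # An OOM kill just before an outage points at a runaway process
      grep -i "out of memory" /var/log/kern.log* /var/log/syslog*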

    Read the article

  • How to route broadcast packets from machine with two network interfaces on same subnet

    - by Syam
    I run RHEL 5 and have two NICs on one machine connected to the same subnet:

    eth0 192.168.100.10
    eth1 192.168.100.11

    My application needs to receive and transmit UDP packets (both unicast & broadcast) via these interfaces. I've found the way to handle the ARP problem, and I've added routes to handle the routing problem:

      ip rule add from 192.168.100.10 lookup 10
      ip route add table 10 default src 192.168.100.10 dev eth0

    (and similarly, table 11 for eth1). The problem is that only unicast packets get routed properly. Broadcast packets always go out through eth0. I tried removing the rules for 192.168.100.0 & 192.168.100.255 from table 255 and adding them to my tables. But then I see ARP requests being sent out for packets to 192.168.100.255 (obviously, no nodes respond and nobody gets any data). Due to several techno-political issues, I'm stuck with this configuration and can't change subnets or try something different. I've tried SO_BINDTODEVICE and it works, but I'd prefer a solution that doesn't need my application to run as root. Is there a way to get this working? Any help is highly appreciated.
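    One hedged thing to try: give each per-source table its own explicit broadcast route, so the rule that matches the socket's source address also decides which NIC the broadcast leaves on (a sketch matching the tables above):

      # Broadcast route in each source-routed table
      ip route add broadcast 192.168.100.255 dev eth0 src 192.168.100.10 table 10
      ip route add broadcast 192.168.100.255 dev eth1 src 192.168.100.11 table 11

    This only helps if each socket is bound to the corresponding interface address, so that the "from" rule fires; in that case SO_BINDTODEVICE (and root) should remain unnecessary.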

    Read the article

  • Server slowdown

    - by Clinton Bosch
    I have a GWT application running on Tomcat on a cloud Linux (Ubuntu) server. Recently I released a new version of the application, and suddenly my server response times have gone from 500 ms average to 15 s average. I have run every monitoring tool I know:

    iostat says my disks are 0.03% utilised
    mysqltuner.pl says I am OK (see output below)
    top says my processor is 99% idle, load average: 0.20, 0.31, 0.33
    memory usage is 50% (-/+ buffers/cache: 3997 3974)

    mysqltuner output:

      [OK] Logged in using credentials from debian maintenance account.
      -------- General Statistics --------------------------------------------------
      [--] Skipped version check for MySQLTuner script
      [OK] Currently running supported MySQL version 5.1.63-0ubuntu0.10.04.1-log
      [OK] Operating on 64-bit architecture
      -------- Storage Engine Statistics -------------------------------------------
      [--] Status: +Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
      [--] Data in MyISAM tables: 370M (Tables: 52)
      [--] Data in InnoDB tables: 697M (Tables: 1749)
      [!!] Total fragmented tables: 1754
      -------- Security Recommendations -------------------------------------------
      [OK] All database users have passwords assigned
      -------- Performance Metrics -------------------------------------------------
      [--] Up for: 19h 25m 41s (1M q [28.122 qps], 1K conn, TX: 2B, RX: 1B)
      [--] Reads / Writes: 98% / 2%
      [--] Total buffers: 1.0G global + 2.7M per thread (500 max threads)
      [OK] Maximum possible memory usage: 2.4G (30% of installed RAM)
      [OK] Slow queries: 0% (1/1M)
      [OK] Highest usage of available connections: 34% (173/500)
      [OK] Key buffer size / total MyISAM indexes: 16.0M/279.0K
      [OK] Key buffer hit rate: 99.9% (50K cached / 40 reads)
      [OK] Query cache efficiency: 61.4% (844K cached / 1M selects)
      [!!] Query cache prunes per day: 553779
      [OK] Sorts requiring temporary tables: 0% (0 temp sorts / 34K sorts)
      [OK] Temporary tables created on disk: 4% (4K on disk / 102K total)
      [OK] Thread cache hit rate: 84% (185 created / 1K connections)
      [!!] Table cache hit rate: 0% (256 open / 27K opened)
      [OK] Open file limit used: 0% (20/2K)
      [OK] Table locks acquired immediately: 100% (692K immediate / 692K locks)
      [OK] InnoDB data size / buffer pool: 697.2M/1.0G
      -------- Recommendations -----------------------------------------------------
      General recommendations:
        Run OPTIMIZE TABLE to defragment tables for better performance
        MySQL started within last 24 hours - recommendations may be inaccurate
        Enable the slow query log to troubleshoot bad queries
        Increase table_cache gradually to avoid file descriptor limits
      Variables to adjust:
        query_cache_size (> 16M)
        table_cache (> 256)
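    If the two [!!] lines are worth chasing, the matching my.cnf changes would look roughly like this; the values are illustrative, and MySQL 5.1 still accepts the old table_cache name:

      # Hypothetical /etc/mysql/my.cnf deltas suggested by the [!!] lines
      [mysqld]
      table_cache      = 1024   # was 256; 0% table cache hit rate above
      query_cache_size = 64M    # was 16M; ~550k prunes/day above

    That said, a jump from 500 ms to 15 s immediately after a release points more at the new application version than at a config that was fine the week before; the slow query log the tuner recommends would separate the two.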

    Read the article

  • Organize code in Chef: libraries, classes and resources

    - by ColOfAbRiX
    I am new to both Chef and Ruby, and I am implementing some scripts to learn them. Now I am facing the problem of how to organize my code: I have created a class in the library directory, and I have used a custom namespace to maintain order. This is a simplified example of my file:

      # ~/chef-repo/cookbooks/mytest/libraries/MyTools.rb
      module Chef::Recipe::EP
        class MyTools
          def self.print_something( text )
            puts "This is my text: #{text}"
          end
          def self.copy_file( dir, file )
            cookbook_file "#{dir}/#{file}" do
              source "#{dir}/#{file}"
            end
          end
        end
      end

    From my recipe I call both methods:

      # ~/chef-repo/cookbooks/mytest/recipes/default.rb
      EP::MyTools.print_something "Hello World!"
      EP::MyTools.copy_file "/etc", "passwd"

    print_something works fine, but with copy_file I get this error:

      undefined method `cookbook_file' for Chef::Recipe::EP::FileTools:Class

    It's clear to me that either I don't know how to create libraries in Chef or I'm missing some basic assumption. Can anyone help me, please? I am looking for a solution to this problem (organizing my code and libraries, using resources in classes) or, better, good Chef documentation, as I find the official documentation unclear and disorganized enough that researching through it is a pain.

    Read the article

  • How to get just value from database query in Excel?

    - by Corin
    I'm creating a spreadsheet as a collection point for information from a number of MS Access databases. I will run a query on each database to get a count of records in a particular table. Each database has the same structure but different content, as they are used in different situations. So the query returns a single value, rec_count. I've figured out how to create that query, save it, and then use it as the data source. So far so good. The problem is that Excel treats the query results as a table. So instead of getting just the single value the query returns, I also get the field name. Thus the result takes up two cells instead of one. When linking in the data source, I only see Table, PivotTable Report and PivotChart as options for viewing the data. I don't want any of those. I just want the single value without any formatting, column headers, etc. Is there a way to do this in Excel 2007?

    Read the article

  • New Secure Website with Apache Reverse Proxy

    - by jtnire
    I wish to set up a new website that will be accessed by users over HTTPS. I think it is good practice to put the "real" web server in a separate subnet and install an Apache reverse proxy in a DMZ. My question is: where should I put the SSL cert(s)? Should I

    a) use a self-signed cert on the "real" web server, and a proper cert on the reverse proxy?
    b) use two real certs, on both the "real" web server and the reverse proxy?
    c) use no cert on the "real" web server, and a proper cert on the reverse proxy?

    I'd like to use a) or c), if possible. I also don't want anyone's browser complaining about a self-signed cert. Thanks
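    For orientation, option a) looks roughly like this on the proxy; a hedged sketch with a placeholder backend hostname, assuming mod_ssl and mod_proxy are loaded. Browsers only ever see the proxy's proper cert, while the proxy-to-backend hop stays encrypted under the self-signed one:

      # Inside the proxy's HTTPS virtual host
      SSLProxyEngine On
      # Apache 2.2 does not verify the backend certificate by default,
      # which is what makes a self-signed backend cert workable here
      ProxyPass        / https://internal-web.example.lan/
      ProxyPassReverse / https://internal-web.example.lan/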

    Read the article

  • Cross-reference of computers having virtualization technology [closed]

    - by msorens
    When considering a new computer, one of my prerequisites is the ability to load Windows 8 in a virtual machine (using VirtualBox). A prerequisite for that is that the host computer have virtualization technology. I located an Intel cross-reference of chips having virtualization technology, but what I am trying to find is a "higher level" cross-reference between computer models and virtualization technology availability, skipping the extra step of having to first look up what CPU chip is in a machine and then cross-referencing that against Intel's list.

    Read the article

  • MySQLTuner and query_cache_size dilemma

    - by wbad
    On a busy MySQL server, MySQLTuner 1.2.0 always recommends adding query_cache_size, no matter how much I increase the value (I tried up to 512MB). On the other hand it warns that:

    Increasing the query_cache size over 128M may reduce performance

    Here are the last results:

      >> MySQLTuner 1.2.0 - Major Hayden <[email protected]>
      >> Bug reports, feature requests, and downloads at http://mysqltuner.com/
      >> Run with '--help' for additional options and output filtering
      -------- General Statistics --------------------------------------------------
      [--] Skipped version check for MySQLTuner script
      [OK] Currently running supported MySQL version 5.5.25-1~dotdeb.0-log
      [OK] Operating on 64-bit architecture
      -------- Storage Engine Statistics -------------------------------------------
      [--] Status: +Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
      [--] Data in InnoDB tables: 6G (Tables: 195)
      [--] Data in PERFORMANCE_SCHEMA tables: 0B (Tables: 17)
      [!!] Total fragmented tables: 51
      -------- Security Recommendations -------------------------------------------
      [OK] All database users have passwords assigned
      -------- Performance Metrics -------------------------------------------------
      [--] Up for: 1d 19h 17m 8s (254M q [1K qps], 5M conn, TX: 139B, RX: 32B)
      [--] Reads / Writes: 89% / 11%
      [--] Total buffers: 24.2G global + 92.2M per thread (1200 max threads)
      [!!] Maximum possible memory usage: 132.2G (139% of installed RAM)
      [OK] Slow queries: 0% (2K/254M)
      [OK] Highest usage of available connections: 32% (391/1200)
      [OK] Key buffer size / total MyISAM indexes: 128.0M/92.0K
      [OK] Key buffer hit rate: 100.0% (8B cached / 0 reads)
      [OK] Query cache efficiency: 79.9% (181M cached / 226M selects)
      [!!] Query cache prunes per day: 1033203
      [OK] Sorts requiring temporary tables: 0% (341 temp sorts / 4M sorts)
      [OK] Temporary tables created on disk: 14% (760K on disk / 5M total)
      [OK] Thread cache hit rate: 99% (676 created / 5M connections)
      [OK] Table cache hit rate: 22% (1K open / 8K opened)
      [OK] Open file limit used: 0% (49/13K)
      [OK] Table locks acquired immediately: 99% (64M immediate / 64M locks)
      [OK] InnoDB data size / buffer pool: 6.1G/19.5G
      -------- Recommendations -----------------------------------------------------
      General recommendations:
        Run OPTIMIZE TABLE to defragment tables for better performance
        Reduce your overall MySQL memory footprint for system stability
        Increasing the query_cache size over 128M may reduce performance
      Variables to adjust:
        *** MySQL's maximum memory usage is dangerously high ***
        *** Add RAM before increasing MySQL buffer variables ***
        query_cache_size (> 192M) [see warning above]

    The server has 76GB of RAM and dual E5-2650s. The load is usually below 2. I'd appreciate hints on how to interpret the recommendation and optimize the database config.
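    Before growing the cache further, it may help to watch the raw counters the tuner derives its ratios from; a hedged sketch using standard MySQL 5.5 status variables:

      # Prunes vs. free memory: many prunes *with* plenty of free memory
      # indicates fragmentation rather than a cache that is too small
      mysql -e "SHOW GLOBAL STATUS LIKE 'Qcache%';"

      # FLUSH QUERY CACHE defragments the cache without emptying it
      mysql -e "FLUSH QUERY CACHE;"

    If Qcache_lowmem_prunes keeps climbing while Qcache_free_memory stays large, a bigger cache will not help, which is consistent with the tuner's own 128M warning.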

    Read the article

  • Possible for linux bridge to intercept traffic?

    - by A G
    I have a Linux machine set up as a bridge between a client and a server:

      brctl addbr br0
      brctl addif br0 eth1
      brctl addif br0 eth2
      ifconfig eth1 0.0.0.0
      ifconfig eth2 0.0.0.0
      ip link set br0 up

    I also have an application listening on port 8080 of this machine. Is it possible to have traffic destined for port 80 passed to my application? I have done some research, and it looks like it could be done using ebtables and iptables. Here is the rest of my setup:

      # set ebtables to pass this traffic up to IP for processing;
      # DROP on the broute table should do this
      ebtables -t broute -A BROUTING -p ipv4 --ip-proto tcp --ip-dport 80 -j redirect --redirect-target DROP

      # set iptables to forward this traffic to my app listening on port 8080
      iptables -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY --on-port 8080 --tproxy-mark 1/1
      iptables -t mangle -A PREROUTING -p tcp -j MARK --set-mark 1/1

      # once the flows are marked, have them delivered locally via the loopback interface
      ip rule add fwmark 1/1 table 1
      ip route add local 0.0.0.0/0 dev lo table 1

      # enable IP packet forwarding
      echo 1 > /proc/sys/net/ipv4/ip_forward

    However, nothing is coming into my application. Am I missing anything? My understanding is that the target DROP on the broute BROUTING chain will push the traffic up to be processed by iptables. Secondly, are there any other alternatives I should investigate?

    Edit: iptables sees the packet at nat PREROUTING, but it looks like it gets dropped after that; the INPUT chain (in either mangle or filter) doesn't see the packet.
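    Two hedged things worth checking in this setup: whether bridged frames are being handed to iptables at all, and whether the mark is applied per-flow the way the kernel's TPROXY reference rules do it:

      # Bridged traffic only traverses iptables when bridge-netfilter is on
      sysctl net.bridge.bridge-nf-call-iptables=1

      # Reference TPROXY pattern from the kernel docs: divert packets that
      # already belong to an established local socket, instead of marking
      # every TCP packet unconditionally
      iptables -t mangle -N DIVERT
      iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
      iptables -t mangle -A DIVERT -j MARK --set-mark 1
      iptables -t mangle -A DIVERT -j ACCEPT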

    Read the article

  • mysql - moving to a lower performance server, how small can I go?

    - by pedalpete
    I've been running a site for a few years now which really isn't growing in traffic, and I want to save some money on hosting, but keep it going for the loyal users of the site and API. The database has a nearly 4-million-row table and runs on a 4GB dual Xeon 5320 server. When I check server stats with ps aux, I see MySQL running at about 11% capacity, so no serious load. The main query against MySQL runs in about 0.45 seconds. I popped over to linode.com to see what kind of performance I could get out of one of their tiny boxes, and their 360MB-RAM Xen VPS returns the same query in 20 seconds. Clearly not good enough. I've looked at the MySQL variables, and they are both very similar (I've included the show variables output below, if anybody is interested). Is there a good way to decide on what size server is needed based on what I'm coming from? Is it RAM that is likely making the difference, given the large table size? Is there a way for me to figure out how much RAM would be ideal? Here's the output of show variables (though I'm not sure it is important):

      +---------------------------+----------------------------+
      | Variable_name             | Value                      |
      +---------------------------+----------------------------+
      | auto_increment_increment  | 1                          |
      | auto_increment_offset     | 1                          |
      | automatic_sp_privileges   | ON                         |
      | back_log                  | 50                         |
      | basedir                   | /usr/                      |
      | bdb_cache_size            | 8384512                    |
      | bdb_home                  | /var/lib/mysql/            |
      | bdb_log_buffer_size       | 262144                     |
      | bdb_logdir                |                            |
      | bdb_max_lock              | 10000                      |
      | bdb_shared_data           | OFF                        |
      | bdb_tmpdir                | /tmp/                      |
      | binlog_cache_size         | 32768                      |
      | bulk_insert_buffer_size   | 8388608                    |
      | character_set_client      | latin1                     |
      | character_set_connection  | latin1                     |
      | character_set_database    | latin1                     |
      | character_set_filesystem  | binary                     |
      | character_set_results     | latin1                     |
      | character_set_server      | latin1                     |
      | character_set_system      | utf8                       |
      | character_sets_dir        | /usr/share/mysql/charsets/ |
      | collation_connection      | latin1_swedish_ci          |
      | collation_database        | latin1_swedish_ci          |
      | collation_server          | latin1_swedish_ci          |
      | completion_type           | 0                          |
      | concurrent_insert         | 1                          |
      | connect_timeout           | 10                         |
      | datadir                   | /var/lib/mysql/            |
      | date_format               | %Y-%m-%d                   |
      | datetime_format           | %Y-%m-%d %H:%i:%s          |
      | default_week_format       | 0                          |
      | delay_key_write           | ON                         |
      | delayed_insert_limit      | 100                        |
      | delayed_insert_timeout    | 300                        |
      | delayed_queue_size        | 1000                       |
      | div_precision_increment   | 4                          |
      | keep_files_on_create      | OFF                        |
      | engine_condition_pushdown | OFF                        |
      | expire_logs_days          | 0                          |
      | flush                     | OFF                        |
      | flush_time                | 0                          |
      | ft_boolean_syntax         | + -                        |

    For some reason, that table formats properly in the preview, but apparently not when viewing the question. Hopefully it isn't needed anyway.
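    To put a number on "how much RAM is enough", a hedged rule of thumb is to measure the data-plus-index footprint the queries actually touch; information_schema gives the upper bound easily (the database name below is a placeholder):

      # Total data and index size per storage engine; if the hot indexes fit
      # in RAM (key_buffer / innodb_buffer_pool), queries stay fast
      mysql -e "SELECT engine,
                       ROUND(SUM(data_length)/1024/1024)  AS data_mb,
                       ROUND(SUM(index_length)/1024/1024) AS index_mb
                FROM information_schema.TABLES
                WHERE table_schema = 'mysite'
                GROUP BY engine;"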

    Read the article

  • EMERGENCY! UPDATE statement for critical MySQL production database now running for 18 hours, need help.

    - by Tim
    We have a table with 500 million rows. Unfortunately, one of the columns was int(11), which is a signed int, and it held an incrementing value that just rolled over the 2.1 billion magic number. This immediately caused downtime for about 10,000 users. We discussed many solutions and decided that we could safely roll this value back by, say, a billion. But we had to roll it back for every row. Here is what we did:

      update Table1
      set MessageId = case
          when MessageId < 1073741824 then 0
          else MessageId - 1073741824
      end;

    I tested this on a table with 10 million rows and it took 11 minutes. So I assumed the larger table would take 550 minutes, or 9 hours. This was going to be our biggest downtime in 3 years. (We're a startup.) It's now going on 18 hours. What should we do? Please don't say what we should have done. I think we should have updated a few million rows at a time. Is there a way we can see progress? Could MySQL have hung? Using MySQL 5.0.22. Thanks!
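    A hedged way to tell "slow but alive" from "hung" without touching the statement, assuming the table is InnoDB: the engine status prints cumulative row-operation counters, so two samples a minute apart show whether rows are still being modified:

      # The ROW OPERATIONS section includes a cumulative "Number of rows
      # inserted ..., updated ..." line; compare two samples
      mysql -e "SHOW ENGINE INNODB STATUS\G" | grep "Number of rows"
      sleep 60
      mysql -e "SHOW ENGINE INNODB STATUS\G" | grep "Number of rows"

    One caution before killing it: aborting a huge InnoDB UPDATE triggers a rollback that can take as long as the work already done.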

    Read the article

  • How should I configure backup of my server?

    - by ed209
    I have just rented a dedicated server. If it helps, this is the config I have:

    CPU1: Intel(R) Core(TM) i7-2600 CPU @ 3.40GHz (Cores 8)
    RAM: 15975 MB
    Disk /dev/sda: 120.0 GB (=> 114 GiB), doesn't contain a valid partition table
    Disk /dev/sdb: 3000.6 GB (=> 2861 GiB), doesn't contain a valid partition table
    Disk /dev/sdc: 3000.6 GB (=> 2861 GiB), doesn't contain a valid partition table

    /dev/sda is a 120GB SSD. This is where I have Ubuntu/LAMP installed. It's the drive that will run my site. With the account I got two other drives of 3000GB each, which I really don't need but which came with the account. I figured I could use these to back up my main 120GB drive. So a couple of things I wondered were:

    Should I use these for backups?
    How should I back up?

    The data I want to back up is a user uploads directory full of images, and the database. Everything else is either in a code repo or backed up some other way. For example, it would be nice to know there is a disk image of the 120GB drive somewhere that I can copy over should there be any problems, but equally I don't mind doing a fresh install of all the software and copying over just the images and database dump. Thanks for your advice! (Also, I'm happy to not use the two other drives and back up elsewhere if that's more sensible.)
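    As a minimal sketch of the second question (the paths, database name, credentials, and the mount point of one of the 3TB drives are all placeholders), a nightly cron job covering exactly the two things that matter could look like:

      #!/bin/sh
      # /etc/cron.daily/site-backup -- hypothetical example
      # Consistent dump of an InnoDB database without locking the site
      mysqldump --single-transaction -u backup -pSECRET mysite \
          | gzip > /mnt/backup/db-$(date +%F).sql.gz

      # Mirror the uploads directory onto the 3TB drive mounted at /mnt/backup
      rsync -a --delete /var/www/mysite/uploads/ /mnt/backup/uploads/

    On-box copies guard against disk failure but not against losing the server itself, so shipping the dump off the machine now and then is still worth considering.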

    Read the article

  • mysqldump isn't able to export a specific database, phpMyAdmin crashes

    - by Devils Child
    I'm experiencing problems with a database on my server (note: all other databases work fine). Once I try to export it with mysqldump, I get this error:

      # mysqldump -u root -pXXXXXXXXX databasename > /root/databasename.sql
      mysqldump: Couldn't execute 'show table status like 'apps'': Lost connection to MySQL server during query (2013)

    Also, phpMyAdmin throws an error when selecting this database and immediately logs out. However, the web site which uses this database works fine. I can also execute SELECT statements on the table named "apps" from the MySQL shell. I tried restarting the MySQL daemon as well as REPAIR DATABASE and REPAIR TABLE, but the problem still persists. I had this problem before; then it disappeared somehow without me doing anything to resolve it. Now the problem is back, and I'm simply unable to create a backup of this database.

    Used software:
    Debian 6.0.7 x64
    MySQL 5.1.66-0

    MySQL version:

      mysql> SHOW VARIABLES LIKE "%version%";
      +-------------------------+-------------------+
      | Variable_name           | Value             |
      +-------------------------+-------------------+
      | protocol_version        | 10                |
      | version                 | 5.1.66-0+squeeze1 |
      | version_comment         | (Debian)          |
      | version_compile_machine | x86_64            |
      | version_compile_os      | debian-linux-gnu  |
      +-------------------------+-------------------+
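    Two hedged checks that often explain error 2013 on a dump while plain SELECTs still work: a corrupted table that only "show table status" trips over, and a too-small client packet limit on rows with large blobs (the table and database names below come from the question; the option value is illustrative):

      # Check the suspect table without dumping it
      mysqlcheck -u root -p databasename apps

      # Retry the dump with a larger client-side packet limit
      mysqldump --max_allowed_packet=512M -u root -p databasename > /root/databasename.sql

    The MySQL error log (often /var/log/mysql/error.log on Debian) usually records why the server thread died mid-query.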

    Read the article

  • Python cannot go over internet network

    - by user1642826
    I am currently trying to work with Python networking, and I have reached a bit of a roadblock. I am not able to network with any computer but localhost, which is kind of useless where networking is concerned. I have tried on my local network, from one computer to another, and I have tried over the Internet; both fail. The only time I can make it work is if (when running on the server's computer) its IP is set as 'localhost' or '192.168.2.129' (the computer's IP). I have spent hours going over opening ports with my ISP and have gotten nowhere, so I decided to try this forum. I have my Windows firewall down. I have no idea what the problem is, and this has spanned almost a year of calls to my ISP. The computer, modem, and router have all been replaced in that time. Here is the code:

      import socket
      import threading
      import socketserver

      class ThreadedTCPRequestHandler(socketserver.BaseRequestHandler):
          def handle(self):
              data = self.request.recv(1024)
              cur_thread = threading.current_thread()
              response = "{}: {}".format(cur_thread.name, data)
              self.request.sendall(b'worked')

      class ThreadedTCPServer(socketserver.ThreadingMixIn, socketserver.TCPServer):
          pass

      def client(ip, port, message):
          sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
          sock.connect((ip, port))
          try:
              sock.sendall(message)
              response = sock.recv(1024)
              print("Received: {}".format(response))
          finally:
              sock.close()

      if __name__ == "__main__":
          # Port 0 means to select an arbitrary unused port
          HOST, PORT = "192.168.2.129", 9000
          server = ThreadedTCPServer((HOST, PORT), ThreadedTCPRequestHandler)
          ip, port = server.server_address
          # Start a thread with the server -- that thread will then start one
          # more thread for each request
          server_thread = threading.Thread(target=server.serve_forever)
          # Exit the server thread when the main thread terminates
          server_thread.daemon = True
          server_thread.start()
          print("Server loop running in thread:", server_thread.name)
          ip = '12.34.56.789'
          print(ip, port)
          client(ip, port, b'Hello World 1')
          client(ip, port, b'Hello World 2')
          client(ip, port, b'Hello World 3')
          server.shutdown()

    I do not know where the error is occurring. I get this error:

      Traceback (most recent call last):
        File "C:\Users\Dr.Frev\Desktop\serverTest.py", line 43, in <module>
          client(ip, port, b'Hello World 1')
        File "C:\Users\Dr.Frev\Desktop\serverTest.py", line 18, in client
          sock.connect((ip, port))
      socket.error: [Errno 10061] No connection could be made because the target machine actively refused it

    Any help will be greatly appreciated. (If this isn't a proper forum for this, could someone direct me to a more appropriate one?)
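    A hedged way to split the problem in half: first confirm what the listener is actually bound to on the server, then probe the port from outside the LAN. If the outside probe fails while the local check passes, the router/NAT rather than Python is refusing the connection (the IP and port below are the question's placeholders):

      # On the (Windows) server: is anything listening on 9000, and on which address?
      netstat -an | findstr 9000     # use: netstat -tln | grep 9000 on Linux

      # From a machine outside the LAN: does the port answer at all?
      nc -vz 12.34.56.789 9000

    Binding HOST to '0.0.0.0' instead of one specific address also removes a variable, since the server then accepts connections arriving on any interface, and error 10061 is exactly what a non-forwarded router port returns.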

    Read the article

  • How do I disable animations in Preview in Lion?

    - by c00kiemonster
    For example, open up a PDF, set the view to two pages, and hit the arrow key up/down. Is there a way to get rid of the animation? Preview feels really snappy and fast, and animations like that just spoil all the fun. This might seem like a proper first-world problem, especially if you're just a casual user of PDFs. But I frequently have 10+ PDFs open for referencing purposes (that is, a lot of flicking back and forth), and the animation just drives me nuts.

    Read the article

  • HAProxy appsession vs cookie precedence

    - by user1139473
    I am trying to find the best solution for balancing and keeping persistence on our application behind HAProxy. Here is our basic configuration: https://gist.github.com/endzyme/1804046b23c37beba520

    After playing around with taking members down and up, and also reloading haproxy (with -sf), I have noticed that appsession isn't 100% effective; it doesn't always 'request-learn'. I also tried adding a cookie JSESSION prefix to the balance in case request-learn didn't take. Unfortunately that produced scenarios where the prefix would list svr2 but the request was balanced to a different server. I am assuming that's because the appsession table is consulted first and sticks on that before using the cookie parameter. I have not tested using cookie as an inserted option (not a prefix on an existing cookie), but I am thinking it would yield similar results.

    My question is: which one is checked first, appsession or cookie, and is it an immediate match after it reads the first one, or a fall-through? Also, as a follow-up: is it not recommended to use both in the same backend? Cookie, as I understand it, takes less memory, is agnostic to reloads, and has far better persistence reliability. Appsession I assume takes less CPU, since it's reading rather than writing. (Bonus question: is there a way to inspect the appsession/cookie table map? "socket show table" doesn't show anything except stick-tables.)

    Many thanks in advance, -Nick
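    For comparison, the cookie-insert variant (rather than prefix) that survives reloads looks roughly like this in a backend; a hedged sketch with hypothetical server names and addresses:

      backend app
          balance roundrobin
          # insert our own cookie instead of modifying the app's JSESSIONID;
          # 'indirect' strips it before the request reaches the server,
          # 'nocache' keeps shared caches from storing the Set-Cookie
          cookie SRVID insert indirect nocache
          server svr1 10.0.0.1:8080 check cookie svr1
          server svr2 10.0.0.2:8080 check cookie svr2

    Because the server identity lives in the client's cookie rather than in a table inside the haproxy process, an -sf reload has nothing to forget.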

    Read the article

  • Comparing 128MB GeForce 8600GT and 512MB Radeon X1650

    - by Synetech inc.
    Hi, I'm trying to determine which is the better of these two video cards: one is a 128MB Nvidia GeForce 8600GT, the other a 512MB ATI Radeon X1650. Both cards are the upper-level mid-range versions of their respective series. On the one hand, the ATI has substantially more VRAM, but the Nvidia supports D3D 10 and SM 4.0, as opposed to the D3D 9.0c/SM 3.0 that the ATI supports. Also, I have always heard better things about Nvidia cards compared to ATI cards. I'm trying to find some advice on which one is better, but I can't find any actual comparisons for these specific cards (the comparisons I can find are only for similar ones like the X1650 Pro or the 8600GT PCI-E), so I figure that what I need to know is whether the extra VRAM is that important. Looking at the ATI table and the Nvidia table seems to indicate that the Nvidia is better, but then again, the Nvidia table also says that the GeForce 8600GT is a PCI-E card with at least 256MB, even though the card in question is an AGP with 128MB. (:-?) (It looks like the ATI card is not supported in Windows 7 while the Nvidia card is, which I suppose is also a factor, though not quite as immediately relevant as performance.) Any ideas? Thanks a lot.

    Read the article

  • Discover intended Foreign Keys from JOINS in scripts

    - by Jason
    I'm inheriting a database that has 400 tables and only 150 foreign key constraints registered. Knowing what I do about the application and looking at the table columns, it's easy to say that there ought to be a lot more. I'm afraid that the current application software will break if I started adding the missing FKs because the developers have probably come to rely on this "freedom", but step one in fixing the problem is to come up with the list of missing FKs so we can evaluate them as a team. To make matters worse, the referencing columns don't share a naming convention. The relationships ARE coded informally into the hundreds of ad-hoc queries and stored procedures, so my hope is to parse these files programmatically looking for JOINS between actual tables (but not table variables, etc). Challenges I foresee in this approach are: newlines, optional aliases and table hints, alias resolution. Any better ideas? (Besides quitting) Are there any pre-built tools that can solve this? I don't think regex can handle this. Do you disagree? SQL Parsers? I tried using Microsoft.SqlServer.Management.SqlParser.Parser but all that is exposed is the lexer - can't get an AST out of it - all that stuff is internal.
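    As a crude triage pass (not the parser-grade solution asked for), a regex can at least shrink the haystack: harvesting "JOIN ... ON a.x = b.y" fragments and counting repeats surfaces the most common implied relationships for manual review. A hedged sketch, assuming the scripts live under a sql/ directory; it deliberately misses multi-line JOINs:

      # Case-insensitive, whitespace-tolerant, tolerates [bracketed] identifiers
      grep -RhioE "join[[:space:]]+[][a-z0-9_.]+([[:space:]]+(as[[:space:]]+)?[a-z0-9_]+)?[[:space:]]+on[[:space:]]+[][a-z0-9_.]+[[:space:]]*=[[:space:]]*[][a-z0-9_.]+" sql/ \
        | tr 'A-Z' 'a-z' | sort | uniq -c | sort -rn | head -40

    The high-count survivors are the candidate FKs worth checking against actual column data before proposing constraints to the team.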

    Read the article

  • How to change the style of a source reference in Word?

    - by ldigas
    I have a number of references in Word 2007; there are several ways of referencing them, and the one I find most fitting is purely numeric, e.g. (3) for reference number 3 in the list. But since I reference equations with round parentheses, I'd like to change the literature references to square brackets. How can I do that?

    Read the article

  • Service-Oriented Architecture and Web Services

    Service-oriented architecture is an architectural model for developing distributed systems across a network or the Internet. The main goal of this model is to create a collection of subsystems that function as one unified system. This approach allows applications to work within the context of a client-server relationship, much like a web browser interacts with a web server. In this relationship, a client application can request an action to be performed by a server application, and the results are returned to the requesting client. It is important to note that the primary implementation of service-oriented architecture is through the use of web services. Web services are components of a remote application exposed over a network. Typically, web services communicate over the HTTP and HTTPS protocols, which are also the standard protocols for accessing web pages on the Internet. These exposed components are self-contained and self-describing. Due to this independence, web services can be called by any application, as long as the service can be accessed via the network. Web services allow for a lot of flexibility when connecting two distinct systems, because the service works independently of the client. For example, a web service built with Java in a UNIX environment will have no problem handling requests from a C# application in a Windows environment, because the two systems communicate over an open protocol supported by both. Additionally, web services can be discovered using UDDI.

    References:
    Colan, M. (2004). Service-Oriented Architecture expands the vision of web services, Part 1. Retrieved on August 21, 2011 from http://www.ibm.com/developerworks/library/ws-soaintro/index.html
    W3Schools.com. (2011). Web Services Introduction - What is Web Services. Retrieved on August 21, 2011 from http://www.w3schools.com/webservices/ws_intro.asp
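    The interoperability claim is easy to make concrete: because the contract is just HTTP plus a self-describing payload, any client that speaks HTTP can consume the service, regardless of the languages or operating systems on either end. A hedged illustration with a made-up endpoint:

      # Any HTTP-capable client, on any OS, can call the same service;
      # the endpoint URL here is hypothetical, for illustration only
      curl -s -H "Accept: application/json" \
           "https://api.example.com/orders/1234"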

    Read the article
