Search Results

Search found 16059 results on 643 pages for 'global temp tables'.


  • Recursive Query Help

    - by Josh
    I have two tables in my database schema that represent an entity having a many-to-many relationship with itself:

        Role
        ---------------------
        +RoleID
        +Name

        RoleHasChildRole
        ---------------------
        +ParentRoleID
        +ChildRoleID

    Essentially, I need to be able to write a query such that, given a set of roles, it returns the unique set of all related roles, recursively. This is an MSSQL 2008 database.
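
    A recursive common table expression is the usual tool for this on SQL Server 2008. A minimal sketch, assuming the seed roles arrive in a table variable and the hierarchy contains no cycles (a cycle would make the recursion run away):

        DECLARE @SeedRoles TABLE (RoleID int);   -- hypothetical input set

        WITH RelatedRoles AS (
            SELECT RoleID FROM @SeedRoles
            UNION ALL
            SELECT rc.ChildRoleID
            FROM RoleHasChildRole rc
            JOIN RelatedRoles rr ON rr.RoleID = rc.ParentRoleID
        )
        SELECT DISTINCT r.RoleID, r.Name
        FROM RelatedRoles rr
        JOIN Role r ON r.RoleID = rr.RoleID;

    The DISTINCT collapses roles that are reachable through more than one parent.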

    Read the article

  • How to open multiple viewer windows in Toad?

    - by RemotecUk
    I'm sure this is pretty simple, but I just can't seem to find the option. My Toad for MySQL install only seems to allow me one "Viewer" window at a time, so it's impossible to view multiple tables, or to have a window open for each table I'm working with. Does anyone know how to change this?

    Read the article

  • Nginx + HAProxy + Thin + Rails: 503 Service Unavailable

    - by Luca G. Soave
    I don't know how to troubleshoot this. I get a "503 Service Unavailable" HTTP error for all nginx upstream proxy_pass calls to the haproxy fast_thin and slow_thin listeners (server 127.0.0.1:3100 and server 127.0.0.1:3200), which load-balance across six Thin servers (127.0.0.1:3000 .. 3005). Static files like /blog are currently fine. The chain is: nginx on port 80 -> haproxy on 3100 and 3200 -> Thin on 3000 .. 3005 -> Rails.

    Here is /etc/nginx/nginx.conf:

        user nginx;
        worker_processes 2;
        pid /var/run/nginx.pid;
        events {
            worker_connections 1024;
        }
        http {
            include /etc/nginx/mime.types;
            default_type application/octet-stream;
            log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                            '$status $body_bytes_sent "$http_referer" '
                            '"$http_user_agent" "$http_x_forwarded_for"';
            sendfile on;
            tcp_nopush on;
            keepalive_timeout 65;
            tcp_nodelay on;
            include /etc/nginx/conf.d/*.conf;
        }

    then /etc/nginx/conf.d/default.conf:

        upstream fast_thin { server 127.0.0.1:3100; }
        upstream slow_thin { server 127.0.0.1:3200; }

        server {
            listen 80;
            server_name www.gitwatcher.com;
            rewrite ^/(.*) http://gitwatcher.com/$1 permanent;
        }
        server {
            listen 80;
            server_name gitwatcher.com;
            access_log /var/www/gitwatcher/log/access.log;
            error_log /var/www/gitwatcher/log/error.log;
            root /var/www/gitwatcher/public;
            # index index.html;
            location /about { proxy_pass http://fast_thin; break; }
            location /trends { proxy_pass http://slow_thin; break; }
            location /categories { proxy_pass http://slow_thin; break; }
            location /signout { proxy_pass http://slow_thin; break; }
            location /auth/github { proxy_pass http://slow_thin; break; }
            location / {
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                if (-f $request_filename/index.html) {
                    rewrite (.*) $1/index.html break;
                }
                if (-f $request_filename.html) {
                    rewrite (.*) $1.html break;
                }
                if (!-f $request_filename) {
                    proxy_pass http://slow_thin;
                    break;
                }
            }
        }

    then the haproxy config file /etc/haproxy/haproxy.cfg:

        global
            log 127.0.0.1 local0
            log 127.0.0.1 local1 notice
            #log loghost local0 info
            maxconn 4096
            #chroot /usr/share/haproxy
            user haproxy
            group haproxy
            daemon
            #debug
            #quiet
            nbproc 1                # number of processing cores

        defaults
            log global
            retries 3
            maxconn 2000
            contimeout 5000
            mode http
            clitimeout 60000        # maximum inactivity time on the client side
            srvtimeout 30000        # maximum inactivity time on the server side
            timeout connect 4000    # maximum time to wait for a connection attempt to a server to succeed
            option httplog
            option dontlognull
            option redispatch
            option httpclose        # disable keepalive (HAProxy does not yet support the HTTP keep-alive mode)
            option abortonclose     # enable early dropping of aborted requests from pending queue
            option httpchk          # enable HTTP protocol to check on servers health
            option forwardfor       # enable insert of X-Forwarded-For headers
            balance roundrobin      # each server is used in turns, according to assigned weight
            stats enable            # enable web-stats at /haproxy?stats
            stats auth haproxy:pr0xystats   # force HTTP Auth to view stats
            stats refresh 5s        # refresh rate of stats page

        listen rails_proxy 127.0.0.1:3100
            # - equal weights on all servers
            # - maxconn will queue requests at HAProxy if limit is reached
            # - minconn dynamically scales the connection concurrency (bound by maxconn) depending on size of HAProxy queue
            # - check health every 20000 microseconds
            server web1 127.0.0.1:3000 weight 1 minconn 3 maxconn 6 check inter 20000
            server web1 127.0.0.1:3001 weight 1 minconn 3 maxconn 6 check inter 20000
            server web1 127.0.0.1:3002 weight 1 minconn 3 maxconn 6 check inter 20000

        listen slow_proxy 127.0.0.1:3200
            # cluster for slow requests, lower the queues, check less frequently
            server slow1 127.0.0.1:3003 weight 1 minconn 1 maxconn 3 check inter 40000
            server slow2 127.0.0.1:3004 weight 1 minconn 1 maxconn 3 check inter 40000
            server slow3 127.0.0.1:3005 weight 1 minconn 1 maxconn 3 check inter 40000

    and the Thin config file /etc/thin/gitwatcher.yml:

        ---
        chdir: /var/www/gitwatcher
        environment: production
        address: 0.0.0.0
        port: 3000
        timeout: 30
        log: log/thin.log
        pid: tmp/pids/thin.pid
        max_conns: 1024
        max_persistent_conns: 100
        require: []
        wait: 30
        servers: 6
        daemonize: true

    If I look at the open listening ports, I get the following:

        root@fullness:/var/www/gitwatcher# lsof | grep TCP | egrep "nginx|haproxy|thin"
        nginx      834     root  8u  IPv4    921  0t0  TCP *:http (LISTEN)
        nginx      835    nginx  8u  IPv4    921  0t0  TCP *:http (LISTEN)
        nginx      837    nginx  8u  IPv4    921  0t0  TCP *:http (LISTEN)
        haproxy   1908  haproxy  4u  IPv4  11699  0t0  TCP localhost:3100 (LISTEN)
        haproxy   1908  haproxy  6u  IPv4  11701  0t0  TCP localhost:3200 (LISTEN)

    and iptables -L gives me the following:

        root@fullness:/var/www/gitwatcher# iptables -L
        Chain INPUT (policy DROP)
        target     prot opt source     destination
        ACCEPT     all  --  anywhere   anywhere    state RELATED,ESTABLISHED
        ACCEPT     all  --  anywhere   anywhere    state RELATED,ESTABLISHED
        ACCEPT     tcp  --  anywhere   anywhere    tcp dpt:22222
        ACCEPT     tcp  --  anywhere   anywhere    tcp dpt:http
        ACCEPT     tcp  --  anywhere   anywhere    tcp dpt:https
        ACCEPT     all  --  anywhere   anywhere
        DROP       all  --  anywhere   anywhere

        Chain FORWARD (policy ACCEPT)
        target     prot opt source     destination

        Chain OUTPUT (policy ACCEPT)
        target     prot opt source     destination
        ACCEPT     all  --  anywhere   anywhere

    Any help?

    Read the article

  • Keep context-configuration when redeploying via Cargo

    - by Björn Pollex
    I am using Tomcat 7 to host a web application that requires a JNDI datasource. Because this resource is specific to this application, I would like to configure it in the application-specific context-descriptor in $CATALINA_BASE/conf/[enginename]/[hostname]/. I am also using Cargo from Maven to deploy the web application to Tomcat. The problem is that when I redeploy with Cargo, it first undeploys the application before deploying it again, and during the undeploy Tomcat deletes the application's context-descriptor, so the datasource is gone after redeploying. I could of course package the context-descriptor with the application, but I would like to keep such container specifics out of the .war. Another alternative is to configure the datasource in the global context-descriptor, but that too seems wrong, because the datasource is supposed to be exclusive to my application. Is my approach fundamentally wrong? What is the best practice here? Is there any way to prevent Tomcat from deleting the descriptor when undeploying?
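
    For reference, a minimal sketch of the kind of per-application descriptor in question, e.g. $CATALINA_BASE/conf/Catalina/localhost/myapp.xml (file name, resource name and connection details are all hypothetical):

        <?xml version="1.0" encoding="UTF-8"?>
        <Context>
            <!-- Application-private JNDI datasource -->
            <Resource name="jdbc/MyAppDS"
                      auth="Container"
                      type="javax.sql.DataSource"
                      driverClassName="com.mysql.jdbc.Driver"
                      url="jdbc:mysql://localhost:3306/myapp"
                      username="appuser"
                      password="secret"
                      maxActive="20"
                      maxIdle="5"/>
        </Context>

    This is exactly the file Tomcat removes on undeploy, which is what makes the Cargo redeploy cycle lose the configuration.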

    Read the article

  • Problem with JavaScript in Firefox when moving Google ads

    - by thomas woelfer
    Hi. I have a website with Google ads on it, and I would like to make it load faster, so I moved all the Google scripts to the bottom of the page. I also have a placeholder at each location where an ad should be displayed, plus other divs that (initially) receive the ads. Finally, in window.onload, I move the ads that have just been fetched from Google to their target locations. (A simple example page is here: http://www.nickles.de/temp/ads.html) This works in IE, but it doesn't work in Firefox: non-text ads show up fine, while text ads don't (or at least not reliably). Any ideas what could be causing this? Thanks! -thomas woelfer
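
    For reference, a minimal sketch of the move described above, with hypothetical element ids (the real page derives them from the ad slots):

        window.onload = function () {
            // The ad script wrote its markup into this initially hidden div...
            var loader = document.getElementById('ad-loader');
            // ...and this is the visible placeholder in the layout.
            var target = document.getElementById('ad-target');
            // Reparent everything the ad script produced.
            while (loader.firstChild) {
                target.appendChild(loader.firstChild);
            }
        };

    One known Gecko behaviour worth checking: reparenting an iframe causes it to reload in Firefox, which can leave ad iframes empty after the move.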

    Read the article

  • Capture ASP output for monitoring

    - by scourge.zero
    How do I capture ASP.NET output and store it in memory temporarily, so that I can use it in an application to do a comparison? For example: there is a site with ASP output. I do not have server access; all I can do is view the output. The site is a monitor of all logged-in users, per channel. The output looks like this:

        Channel 1
        Username     logged in (0/1)
        Username 1   1
        John Smith   1
        George B     0

        Channel 2
        Username     logged in (0/1)
        Username 1   1
        John Smith   0
        George B     0

    What I want to do is capture this output and then show it this way:

        Username     Channel 1   Channel 2   Total
        Username 1   1           1           2
        John Smith   1           0           1
        George B     0           0           0

    I don't know where to start.
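
    Since there is no server access, the usual starting point is plain screen-scraping: fetch the rendered HTML over HTTP and parse it in memory. A minimal sketch, with a hypothetical URL and the parsing left as a placeholder:

        using System;
        using System.Net;

        class MonitorScraper
        {
            static void Main()
            {
                using (var client = new WebClient())
                {
                    // Fetch the monitor page exactly as a browser would see it.
                    string html = client.DownloadString("http://example.com/monitor.aspx");

                    // TODO: parse 'html' (with a regex or an HTML parser), tally
                    // the per-channel logged-in flags per user, and print the
                    // pivoted Username / Channel 1 / Channel 2 / Total table.
                    Console.WriteLine(html.Length);
                }
            }
        }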

    Read the article

  • CakePHP menu generation

    - by RSK
    Hi everyone. I am trying to generate a dynamic menu according to the user permissions granted via the ACL component in CakePHP. That is, when a user logs in, I need to check which actions are permitted for that specific user, and generate the menu from that list of actions. Can anyone help me get all the permitted actions from the acos, aros and aros_acos tables? Thanks in advance.
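
    A rough sketch of the lookup in raw SQL, assuming CakePHP's stock ACL schema and that the user's ARO id is already known (check whichever permission column applies):

        SELECT acos.alias
        FROM aros_acos
        JOIN acos ON acos.id = aros_acos.aco_id
        WHERE aros_acos.aro_id = :userAroId
          AND aros_acos._read = '1';

    In practice the inherited (-1) permissions make this more involved, which is why calling the Acl component's check() per action is the safer route.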

    Read the article

  • MySQL count, distinct, join? Confusion

    - by calum
    Hello, I have two tables:

        tblItems
        ID | orderID | productID
        1  | 1       | 2
        2  | 1       | 2
        3  | 2       | 1
        4  | 3       | 2

        tblProducts
        productID | productName
        1         | ABC
        2         | DEF

    I'm attempting to find the most popular product based on what's in tblItems, and display the product name and the number of times it appears in the tblItems table. I can get MySQL to count up the total like this:

        $sql = "SELECT COUNT(productID) AS CountProductID FROM tblItems";

    but I can't figure out how to join the products table on; if I try a LEFT JOIN the query goes horribly wrong. Hopefully that's not too confusing. Thanks!
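
    A sketch of the grouped join (table and column names as given above); for the sample data it would return DEF with a count of 3:

        SELECT p.productName,
               COUNT(i.productID) AS CountProductID
        FROM tblItems i
        JOIN tblProducts p ON p.productID = i.productID
        GROUP BY p.productName
        ORDER BY CountProductID DESC
        LIMIT 1;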

    Read the article

  • Using Perl to parse a file and insert specific values into a database

    - by Sean
    Disclaimer: I'm a newbie at scripting in Perl; this is partly a learning exercise, but still a project for work. Also, I have a much stronger grasp of shell scripting, so my examples will likely be formatted with that mindset (but I would like to write them in Perl). Sorry in advance for my verbosity; I want to make sure I am at least marginally clear in getting my point across.

    I have a text file (a reference guide) that was a Word document, converted to text and then switched from Windows to UNIX format in Notepad++. The file is uniform, in that each section has the same fields, formatting and tables. My plan, in a basic way, is to grab each section, keyed by unique batch job names, and place all of the values into a database (or maybe just an Excel file) so all the fields can be searched and edited for each job much more easily than in the Word file, and possibly put a web interface on it later.

    So what I want to do is grab each section by doing something like:

        sed -n '/job_name_1_regex/,/job_name_2_regex/p' file.txt

    How would this be formatted within a Perl script? (Grab the section in total, then break it down further from there.) To read the file in the script I have:

        open FORMAT_FILE, 'test_format.txt';

    and then I use foreach $line (<FORMAT_FILE>) to parse the file line by line. Is there a better way?

    My next problem is that I converted from a Word doc with tables, which look like:

        Table Heading 1        Table Heading 2
        Heading 1/Value 1      Heading 2/Value 1
        Heading 1/Value 2      Heading 2/Value 2

    but in the text file look like:

        Table Heading 1 Table Heading 2Heading 1/Value 1Heading 1/Value 2Heading 2/Value 1Heading 2/Value 2

    I want to have "Table Heading 1" and "Table Heading 2" as column names and then put the respective values there. I am just not sure how to relate the values to the headings in the text file. The values of Heading 1 will always be on the line number of Heading 1 plus 2 (Heading 1, Heading 2, then the values for Heading 1). I know this can be done in awk/sed pretty easily; I'm just not sure how to address it inside a Perl script. After I have all the right values, linking it up to a database may be an issue as well; I haven't started looking at the way Perl interacts with DBs yet. Sorry if this is a bit scatterbrained; it's still not fully formed in my head.
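
    On the first question: Perl's range (flip-flop) operator gives the sed behaviour almost verbatim. A minimal sketch, with hypothetical job-name regexes and the more modern lexical filehandle style:

        #!/usr/bin/perl
        use strict;
        use warnings;

        open my $fh, '<', 'test_format.txt' or die "Cannot open file: $!";

        my @section;
        while (my $line = <$fh>) {
            # Like sed's /start/,/end/ range: true from the line matching the
            # first job name until the line matching the next one.
            if ($line =~ /^job_name_1/ .. $line =~ /^job_name_2/) {
                push @section, $line;
            }
        }
        close $fh;

        print @section;   # the captured section, ready to break down further

    The three-argument open with a lexical $fh, and while (which reads one line at a time rather than slurping the whole file into a list the way foreach does), are also the usual answers to the "is there a better way?" question.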

    Read the article

  • Where has Sun's MySQL database manager gone?

    - by opensas
    If I recall correctly, there were at least two desktop programs from Sun that were very useful for handling MySQL databases. Now all I can find is MySQL Workbench, which is only useful for designing data. Both programs I'm talking about allowed you to manage servers, create databases, create tables and indexes, run queries, edit data, etc. Unfortunately I don't even recall their names. Any idea where I can find them? Thanks a lot.

    Read the article

  • SQL and LINQ query

    - by ile
    The database has the tables Photos and PhotoAlbums. I need a query that will select all albums and only one photo from each album. I need SQL and LINQ versions of this query. Thanks in advance.
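
    One hedged possibility for the SQL half, assuming SQL Server and hypothetical key names, picking each album's lowest photo id as its representative:

        SELECT a.AlbumID,
               a.Title,
               (SELECT MIN(p.PhotoID)
                FROM Photos p
                WHERE p.AlbumID = a.AlbumID) AS FirstPhotoID
        FROM PhotoAlbums a;

    The matching LINQ shape would use the same correlated subquery idea, e.g. photos.Where(p => p.AlbumID == a.AlbumID).FirstOrDefault(); the exact syntax depends on the mapping.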

    Read the article

  • Xcode and Instruments for memory leaks

    - by coure06
    I have tested my iPhone app using Xcode and Instruments. I am watching the memory allocation tables, and they show me that everything is increasing, i.e. Bytes, Live Bytes and so on. What does this mean? Am I not deallocating the objects? Can I find out which objects are not being deallocated using Instruments?

    Read the article

  • How do I get the Entity Framework to work with archive flags?

    - by Orion Adrian
    I'm trying to create a set of tables whose rows we don't actually delete; instead, we set archive flags. When we delete an entity, it shouldn't be deleted; it should be marked as archived instead. What are the programming patterns to support this? I would also prefer not to have to roll my own stored procs for every table that has these archive flags, if there is another solution.
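
    One common pattern is to intercept deletes centrally and turn them into updates. A rough sketch of the idea against a DbContext-style API; the EF surface differs a lot between versions, so treat this as the shape of the pattern rather than a drop-in, and IsArchived is a hypothetical column:

        using System.Data;
        using System.Data.Entity;
        using System.Linq;

        public class AppContext : DbContext   // hypothetical context
        {
            public override int SaveChanges()
            {
                foreach (var entry in ChangeTracker.Entries()
                             .Where(e => e.State == EntityState.Deleted))
                {
                    entry.State = EntityState.Modified;          // keep the row
                    entry.CurrentValues["IsArchived"] = true;    // flag it instead
                }
                return base.SaveChanges();
            }
        }

    Queries then need a matching filter (e.g. Where(x => !x.IsArchived)) so archived rows stay invisible by default.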

    Read the article

  • Virtual Fibre Channel HBA in Solaris

    - by Phil
    We are trying to set up some virtual Fibre Channel HBAs in Solaris. This seems to be possible with NPIV. Creating NPIVs in a Solaris global zone works fine, but passing an NPIV to a zone didn't work at all. We tried to pass the NPIV as follows:

        # zonecfg -z zone1 'info'
        zonename: studentz1
        [...]
        device:
            match: /devices/pci@0,0/pci8086,25f9@6/pci8086,350c@0,3/pci1077,140@4/fp@1,0:devctl
            allow-partition not specified
            allow-raw-io not specified

    What we want to do is set up an environment for SAN exercises. We don't have a physical host per student, so we are trying to virtualise that in some way (Solaris zones or VMware). It should be possible to display the WWN of the virtual HBA and mount the storage presented by the disk subsystem. Any ideas on how to pass the NPIV to a Solaris zone, or how to virtualise this with VMware?

    Read the article

  • Best practices: Sending email on behalf of users

    - by Ben Doom
    The company I work for provides testing services for the healthcare industry. As part of our services, we need to send email to our clients' employees. Typically these are temp, part-time, or contract employees, and so have private email addresses (e.g. Hotmail, Gmail, Yahoo!, etc.). Up to now we've been sending from an internal address, but this means that replies come back to us when employees aren't paying attention or don't know to send queries to our clients. I'd like to change this, so that the person who requested that the email be sent is the person who receives the reply. We've used Reply-To: in the past, but it seemed to cause additional mail to be trapped by spam filters. I've been reading about Sender: and on-behalf-of headers, and was wondering what the current best practice is for sending email in a scenario where we need the reply to go to a domain we don't control.
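
    For reference, the header combination under discussion looks roughly like this (addresses are hypothetical); most clients render the From/Sender split as "sender on behalf of from", and replies go to the From address unless Reply-To overrides it:

        From: Jane Requester <jane@client-domain.example>
        Sender: notifications@our-domain.example
        Reply-To: jane@client-domain.example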

    Read the article

  • Using SQL Server Integration Services (SSIS), can you read a file from a FileStream column in SQL Server?

    - by tbrovold
    I am trying to create a tool that lets me upload different types of files (CSV, Excel, XML) over the web and load each file, untouched, into a FileStream column in the database as the "source". Then, using SSIS on the server, I want to create a package that will process that file and load it into other tables to be used by the web application. Is it possible to have SSIS read a file from a FileStream column? If so, how?
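
    One hedged starting point: FILESTREAM data is still queryable as varbinary(max), so an SSIS source can read it with plain T-SQL (table and column names here are hypothetical), and PathName() exposes the backing file for components that want a path instead of bytes:

        SELECT FileData,                    -- the varbinary(max) FILESTREAM column
               FileData.PathName() AS FilePath
        FROM dbo.UploadedFiles
        WHERE UploadId = @UploadId;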

    Read the article

  • DataTable.WriteXml on background thread

    - by Sheraz KHan
    I am trying to serialize DataTables on a background thread and it's failing. Any ideas?

        [Test]
        public void Async_Writing_DataTables()
        {
            string path = @"C:\Temp\SerialzeData";
            if (!Directory.Exists(path))
            {
                Directory.CreateDirectory(path);
            }
            Assert.IsTrue(Directory.Exists(path));

            Thread thread1 = new Thread(new ThreadStart(delegate
            {
                object lockObject = new object();
                for (int index = 0; index < 10; index++)
                {
                    lock (lockObject)
                    {
                        DataTable table = new DataTable("test");
                        table.WriteXml(Path.Combine(path, table.TableName + index + ".xml"));
                    }
                }
            }));
            thread1.Start();
        }

    Read the article

  • Email not working on testing site in Plesk before DNS switch

    - by Dilip Rajkumar
    I have to test my website before my DNS is switched to the new server. My new server runs Plesk. I changed my hosts file to point to the new server and tested the site; the site works fine. However, when I register, two emails are sent: one to the user and one to the admin. The admin email's domain is the same as the server name; for example, my site name is test.com and the admin email is an @test.com address. The email is not sent to the admin. I know it is not sent because Plesk is looking up its own DNS instead of the global public DNS. Does anyone know how to make my site use public DNS when sending email from Plesk? If I point the server at Google Public DNS (8.8.8.8) for the MX lookup, will it work? Please guide me. Thanks in advance.

    Read the article

  • SQL Server 2005 Table Alter History

    - by Kayes
    Hi. Does SQL Server maintain any history to track table alterations, such as column add, delete, rename, or type/length changes? I have found many suggestions to use stored procedures to do this manually, but I'm curious whether SQL Server keeps such a history in any system tables. Thanks.
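
    Out of the box there is no queryable alteration history in the system tables (beyond whatever the short-lived default trace happens to retain), so the usual workaround is a database-level DDL trigger. A sketch, with a hypothetical log table:

        CREATE TRIGGER trg_LogTableAlters
        ON DATABASE
        FOR ALTER_TABLE
        AS
        INSERT INTO dbo.SchemaChangeLog (EventTime, EventXml)
        VALUES (GETDATE(), EVENTDATA());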

    Read the article

  • How can I configure MediaWiki to display numbers in titles?

    - by Wikis
    We would like to display numbers in the titles in our MediaWiki wiki. Specifically, in the table of contents, numbers are shown like this:

        1 Title
        1.1 Subtitle
        2 Another title

    However, on the page the titles (that map to the table of contents) appear like this:

        Title
        text
        Subtitle
        text
        Another title
        text

    That is, no numbers are shown. How can we show numbers in the page contents? There are three possible ways this could be answered (that I can think of). In our order of preference, they are:

        1. Setting per user
        2. Setting per page
        3. Global setting (for all users)
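
    For what it's worth, older MediaWiki releases cover the first and third preferences directly: each user has an "Auto-number headings" preference, and its site-wide default can be flipped in LocalSettings.php. A sketch (option name as in MediaWiki of that era; verify against your version):

        # LocalSettings.php
        $wgDefaultUserOptions['numberheadings'] = true;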

    Read the article

  • SQLite table constraint: unique on multiple columns

    - by Rich
    I can find syntax "charts" on this on the SQLite website, but no examples, and my code is crashing. I have other tables with unique constraints on a single column, but I want to add a constraint on two columns to this table. This is what I have; it causes a SQLiteException with the message "syntax error":

        CREATE TABLE name (column defs) UNIQUE (col_name1, col_name2) ON CONFLICT REPLACE

    I'm doing this based on the table-constraint syntax diagram.
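
    For comparison, a sketch with the table-constraint moved to where the grammar expects it, inside the parentheses after the column definitions (column types invented for the example):

        CREATE TABLE name (
            col_name1 TEXT,
            col_name2 TEXT,
            UNIQUE (col_name1, col_name2) ON CONFLICT REPLACE
        );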

    Read the article

  • DB Designer creates compound primary key

    - by Jon Winstanley
    When adding relationships to a database model in DB Designer 4, a composite primary key is created every time: for every foreign key I add, I get an extra column added to a composite primary key. I think I must have changed a setting, as I don't remember it doing this in the past. Does anyone know how to turn off this feature? I prefer to use a single surrogate primary key in my database tables.

    Read the article

  • What's the best way to store a MySQL database in source control?

    - by Marplesoft
    I am working on an application with a few other people and we'd like to store our MySQL database in source control. My thought is to have two files: one would be the create script for the tables, etc., and the other would be the inserts for our sample data. Is this a good approach? Also, what's the best way to export this information? And any suggestions for workflow, in terms of ways to speed up the process of making changes, exporting, updating, etc.?
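
    The two-file split maps directly onto mysqldump flags; a sketch (database name hypothetical):

        # schema only -> schema.sql
        mysqldump --no-data --routines mydb > schema.sql

        # sample data only -> sample_data.sql
        mysqldump --no-create-info mydb > sample_data.sql

    Re-running the pair after each schema change keeps the files diff-friendly for source control.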

    Read the article

  • Unicode replacement characters for text matching

    - by Christian Harms
    I have some fun with Unicode text sources (all correctly encoded) and I want to match names. It's the classic problem: one source arrives fully accented, the other has flattened names: "Elbląg" vs. "Elblag" (note the character ą). How can I "flatten" ą, á, â or à to a for better matching? Are there Unicode-to-ASCII matching tables?
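
    A minimal sketch of the standard approach: NFKD-decompose and drop the combining marks. This covers ą, á, â and à; letters without a decomposition, such as ł or ø, still need an explicit mapping or a library like Unidecode:

        import unicodedata

        def flatten(text: str) -> str:
            # Split accented letters into base letter + combining mark...
            decomposed = unicodedata.normalize('NFKD', text)
            # ...then keep only the base characters.
            return ''.join(c for c in decomposed if not unicodedata.combining(c))

        print(flatten('Elbląg'))  # -> Elblag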

    Read the article
