Search Results

Search found 58440 results on 2338 pages for 'data cleansing'.


  • create a text table with php using fixed width font

    - by Hintswen
    I want to write a PHP script to output some data as a plain-text table using spaces to get the data into columns (just like the Linux top command). I can't use an HTML table, as the script output will be saved to disk and viewed in a plain-text editor. Is there anything available that can do this automatically (format the data into columns)?
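
    A minimal sketch of the column-padding idea, shown here in Python rather than the asker's PHP (the column names and rows are made up): measure the widest value in each column, then pad every cell to that width.

        # Sketch: fixed-width plain-text table from made-up rows.
        rows = [
            ("PID", "USER", "COMMAND"),
            ("1", "root", "init"),
            ("4242", "www-data", "apache2"),
        ]

        # The widest cell in each column decides that column's width.
        widths = [max(len(cell) for cell in column) for column in zip(*rows)]

        for row in rows:
            print("  ".join(cell.ljust(width) for cell, width in zip(row, widths)))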

    Read the article

  • How to cache YUI DataSource?

    - by Matt McCormick
    I'm setting up a YUI DataTable with filtering by following the steps on the YUI site. However, I am using JSON as the DataSource ResponseType. When I type in a value to filter, the request is sent to the server again. I find this to be wasteful, as all the data has already been retrieved the first time. Is there a way to cache the initial data returned and then filter only according to that data, so another AJAX request does not have to be made?
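
    The general shape of that caching idea, sketched in Python with a made-up record list (none of the YUI-specific wiring is shown): fetch once, keep the result in memory, and answer later filters from the cached copy.

        # Sketch: fetch once, then filter the cached rows locally.
        _cache = None

        def get_rows(fetch):
            """Call fetch() only on the first request; reuse the cached result afterwards."""
            global _cache
            if _cache is None:
                _cache = fetch()
            return _cache

        def filter_rows(rows, text):
            return [row for row in rows if text.lower() in row["name"].lower()]

        rows = get_rows(lambda: [{"name": "Alice"}, {"name": "Bob"}])
        print(filter_rows(rows, "al"))   # filtered without a second request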

    Read the article

  • Using multiple named outlets and a wrapper view with no content in Emberjs

    - by user1889776
    I'm trying to use multiple named outlets with Ember.js. Is my approach below correct? Markup: <script type="text/x-handlebars" data-template-name="application"> <div id="mainArea"> {{outlet main_area}} </div> </script> <script type="text/x-handlebars" data-template-name="home"> <ul id="sections"> {{outlet sections}} </ul> <ul id="categories"> {{outlet categories}} </ul> </script> <script type="text/x-handlebars" data-template-name="sections"> {{#each section in controller}} <li><img {{bindAttr src="section.image"}}></li> {{/each}} </script> <script type="text/x-handlebars" data-template-name="categories"> {{#each category in controller}} <img {{bindAttr src="category.image"}}> {{/each}} </script>? JS Code: Here I set the content of the various controllers to data grabbed from a server and connect outlets with their corresponding views. Since the HomeController has no content, set its content to an empty object - a hack to get the rid of this error message: Uncaught Error: assertion failed: Cannot delegate set('categories' ) to the 'content' property of object proxy : its 'content' is undefined. App.Router = Ember.Router.extend({ enableLogging: false, root: Ember.Route.extend({ index: Ember.Route.extend({ route: '/', connectOutlets: function(router){ router.get('sectionsController').set('content',App.Section.find()); router.get('categoriesController').set('content', App.Category.find()); router.get('applicationController').connectOutlet('main_area', 'home'); router.get('homeController').connectOutlet('home', {}); router.get('homeController').connectOutlet('categories', 'categories'); router.get('homeController').connectOutlet('sections', 'sections'); } }) }) });

    Read the article

  • populate textboxes with xml node attributes

    - by Doug
    I have Data.xml:

        <?xml version="1.0" encoding="utf-8" ?>
        <data>
          <album>
            <slide title="Autum Leaves" description="Leaves from the fall of 1986" source="images/Autumn Leaves.jpg" thumbnail="images/Autumn Leaves_thumb.jpg" />
            <slide title="Creek" description="Creek in Alaska" source="images/Creek.jpg" thumbnail="images/Creek_thumb.jpg" />
          </album>
        </data>

    I'd like to be able to edit the attributes of each Slide node via a GridView (that has a "Select" column added). And so far I have:

        protected void GridView1_SelectedIndexChanged(object sender, EventArgs e)
        {
            int selectedIndex = GridView1.SelectedIndex;
            LoadXmlData(selectedIndex);
        }

        private void LoadXmlData(int selectedIndex)
        {
            XmlDocument xmldoc = new XmlDocument();
            xmldoc.Load(MapPath(@"..\photo_gallery\Data.xml"));
            XmlNodeList nodelist = xmldoc.DocumentElement.ChildNodes;
            XmlNode xmlnode = nodelist.Item(selectedIndex);
            titleTextBox.Text = xmlnode.Attributes["title"].InnerText;
            descriptionTextBox.Text = xmlnode.Attributes["description"].InnerText;
            sourceTextBox.Text = xmlnode.Attributes["source"].InnerText;
            thumbTextBox.Text = xmlnode.Attributes["thumbnail"].InnerText;
        }

    The code for LoadXmlData is just a guess on my part - I'm new to working with XML in this way. I'd like to have the user select a row from the GridView, then populate a set of text boxes with each slide's attributes for updating back to the Data.xml file. The error I'm getting is "Object reference not set to an instance of an object" at the line: titleTextBox.Text = xmlnode.Attributes["@title"].InnerText; - so I'm not reaching the "title" attribute of the slide node. Thanks for any ideas you may have.
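
    For comparison, the same round trip (pick the n-th slide, read its attributes, write a change back) sketched in Python with ElementTree; the file name matches the question, everything else is illustrative and not the asker's C#:

        # Sketch: read and update attributes of the n-th <slide> element.
        import xml.etree.ElementTree as ET

        tree = ET.parse("Data.xml")
        slides = tree.findall(".//slide")      # every <slide> under <album>

        slide = slides[0]                      # the selected row index
        print(slide.get("title"), slide.get("source"))

        slide.set("description", "Updated description")
        tree.write("Data.xml", encoding="utf-8", xml_declaration=True)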

    Read the article

  • The confusion on python encoding

    - by zhangzhong
    I retrieved data encoded in Big5 from the database, and I want to send it as the HTML content of an email. The code is like this:

        html += """<tr><td>"""
        html += """unicode(rs[0], 'big5')""" # rs[0] is data encoded in big5

    When I run the script, this error is raised: UnicodeDecodeError: 'ascii' codec can't decode byte...... However, when I tried the code at the interactive Python command line, no errors were raised. Could you give me a clue?
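
    The usual fix is to decode the Big5 bytes into text as soon as they leave the database, build the whole message as text, and encode exactly once when handing it to the mailer. A sketch in modern Python (the asker's snippet is Python 2, where mixing byte strings with unicode triggers this same implicit ASCII decode error); the sample bytes are illustrative:

        # Sketch: decode Big5 on the way in, encode once on the way out.
        row_value = b"\xa4\xa4\xa4\xe5"          # example Big5 bytes as the DB might return them
        text = row_value.decode("big5")          # bytes -> str, done as soon as the data is read

        html = "<tr><td>" + text + "</td></tr>"  # build the message entirely as str

        payload = html.encode("utf-8")           # encode once at the boundary (e.g. the email body)
        print(payload)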

    Read the article

  • getjson and servers

    - by gheil.apl
    Is getJSON only useful for those who control a server? Is there any other alternative for getting data from a file? I tried

        $.getJSON("good.json", function(data){ out=out+"good.json: "+data + ","; });

    where good.json = {"a":"1","b":"2"}, and got a result of 'null' for data. These are all valid JSON files, and all of them give null when used in the above: good.htm, assoc.json, assoc.js, stub.json, stub.js, test.js, test.txt. The above is in an interactive setting at http://jsbin.com/dbJSON/8/edit; the output (of null) can be seen by clicking 'output'.
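
    Two quick checks that do not depend on jQuery at all, sketched in Python: confirm the file really parses as JSON, and serve its directory over HTTP (getJSON fetches over HTTP, so the file has to be reachable from some server rather than just sitting on disk). The file name comes from the question; the port is arbitrary.

        # Sketch: validate the JSON file, then serve its directory over HTTP.
        import json
        from http.server import HTTPServer, SimpleHTTPRequestHandler

        with open("good.json") as f:
            data = json.load(f)        # raises an error if the file is not valid JSON
        print(data["a"], data["b"])    # 1 2

        # Serve the current directory so http://localhost:8000/good.json is fetchable.
        HTTPServer(("localhost", 8000), SimpleHTTPRequestHandler).serve_forever()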

    Read the article

  • Extracting Certain XML Elements with PHP SimpleXML

    - by Peter
    I am having some problems parsing this piece of XML using SimpleXML. There is always only one Series element, and a variable number of Episode elements beneath. I want to parse XML so I can store the Series data in one table, and all the Episode data in another table. XML: <Data> <Series> <id>80348</id> <Genre>|Action and Adventure|Comedy|Drama|</Genre> <IMDB_ID>tt0934814</IMDB_ID> <SeriesID>68724</SeriesID> <SeriesName>Chuck</SeriesName> <banner>graphical/80348-g.jpg</banner> </Series> <Episode> <id>935481</id> <Director>Robert Duncan McNeill</Director> <EpisodeName>Chuck Versus the Third Dimension 2D</EpisodeName> <EpisodeNumber>1</EpisodeNumber> <seasonid>27984</seasonid> <seriesid>80348</seriesid> </Episode> <Episode> <id>935483</id> <Director>Robert Duncan McNeill</Director> <EpisodeName>Buy More #15: Employee Health</EpisodeName> <EpisodeNumber>2</EpisodeNumber> <seasonid>27984</seasonid> <seriesid>80348</seriesid> </Episode> </Data> When I attempt to access just the first Series element and child nodes, or iterate through the Episode elements only it does not work. I have also tried to use DOMDocument with SimpleXML, but could not get that to work at all. PHP Code: <?php if(file_exists('en.xml')) { $data = simplexml_load_file('en.xml'); foreach($data as $series) { echo 'id: <br />' . $series->id; echo 'imdb: <br />' . $series->IMDB_ID; } } ?> Output: id:80348 imdb:tt0934814 id:935481 imdb: id:1534641 imdb: Any help would be greatly appreciated.
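
    The shape of the parse the asker is after, one record for the Series plus a list of Episode records, sketched with Python's ElementTree (concept only; the original question is about PHP SimpleXML, and the file name is taken from it):

        # Sketch: one dict for <Series>, a list of dicts for the <Episode> elements.
        import xml.etree.ElementTree as ET

        root = ET.parse("en.xml").getroot()

        series = root.find("Series")
        series_row = {child.tag: child.text for child in series}     # id, SeriesName, IMDB_ID, ...

        episode_rows = [
            {child.tag: child.text for child in episode}
            for episode in root.findall("Episode")                   # only the Episode elements
        ]

        print(series_row["SeriesName"], len(episode_rows))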

    Read the article

  • Interfacing with Compris POS

    - by Sargun Dhillon
    Does anyone have any data on how to interface with a Compris POS? I have a Compris POS, and I need to grab data from the database. I can't get information from NCR regarding the underlying data format, and I was wondering if anyone had reverse engineered the device, or had any documentation on the device.

    Read the article

  • how to change file permissions in postfix?

    - by Zdanozdan
    Hi, in the postfix main.cf I have configured a service to invoke an external PHP script:

        smtp      inet  n  -  -  -  -  smtpd
          -o content_filter=myservice:www-data
        myservice unix  -  n  n  -  1  pipe
          flags=Rq user=me null_sender= argv=/home/me/my_script.php

    So far so good: my_script.php is executed, and it creates the file my_file.txt in the home dir. However, the most I can manage is

        -rw------- 1 me www-data 16 2009-10-11 19:35 my_file.txt

    How do I add 'r' permission for the www-data group?
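
    The usual fix is for the script itself to loosen the mode of the file it creates, either by relaxing the umask before writing or by chmod-ing afterwards. The idea sketched in Python (the asker's script is PHP, whose umask()/chmod() functions work the same way); the path matches the question, the file contents are made up:

        # Sketch: make sure the www-data group can read the file the script writes.
        import os

        os.umask(0o027)                          # files created from now on get group read (rw-r-----)
        with open("/home/me/my_file.txt", "w") as f:
            f.write("queued message data\n")

        os.chmod("/home/me/my_file.txt", 0o640)  # or set the mode explicitly after the fact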

    Read the article

  • How do I print out objects in an array in python?

    - by Jonathan
    I'm writing a code which performs a k-means clustering on a set of data. I'm actually using the code from a book called collective intelligence by O'Reilly. Everything works, but in his code he uses the command line and i want to write everything in notepad++. As a reference his line is >>>kclust=clusters.kcluster(data,k=10) >>>[rownames[r] for r in k[0]] Here is my code: from PIL import Image,ImageDraw def readfile(filename): lines=[line for line in file(filename)] # First line is the column titles colnames=lines[0].strip( ).split('\t')[1:] rownames=[] data=[] for line in lines[1:]: p=line.strip( ).split('\t') # First column in each row is the rowname rownames.append(p[0]) # The data for this row is the remainder of the row data.append([float(x) for x in p[1:]]) return rownames,colnames,data from math import sqrt def pearson(v1,v2): # Simple sums sum1=sum(v1) sum2=sum(v2) # Sums of the squares sum1Sq=sum([pow(v,2) for v in v1]) sum2Sq=sum([pow(v,2) for v in v2]) # Sum of the products pSum=sum([v1[i]*v2[i] for i in range(len(v1))]) # Calculate r (Pearson score) num=pSum-(sum1*sum2/len(v1)) den=sqrt((sum1Sq-pow(sum1,2)/len(v1))*(sum2Sq-pow(sum2,2)/len(v1))) if den==0: return 0 return 1.0-num/den class bicluster: def __init__(self,vec,left=None,right=None,distance=0.0,id=None): self.left=left self.right=right self.vec=vec self.id=id self.distance=distance def hcluster(rows,distance=pearson): distances={} currentclustid=-1 # Clusters are initially just the rows clust=[bicluster(rows[i],id=i) for i in range(len(rows))] while len(clust)>1: lowestpair=(0,1) closest=distance(clust[0].vec,clust[1].vec) # loop through every pair looking for the smallest distance for i in range(len(clust)): for j in range(i+1,len(clust)): # distances is the cache of distance calculations if (clust[i].id,clust[j].id) not in distances: distances[(clust[i].id,clust[j].id)]=distance(clust[i].vec,clust[j].vec) #print 'i' #print i #print #print 'j' #print j #print d=distances[(clust[i].id,clust[j].id)] if d<closest: closest=d lowestpair=(i,j) # calculate the average of the two clusters mergevec=[ (clust[lowestpair[0]].vec[i]+clust[lowestpair[1]].vec[i])/2.0 for i in range(len(clust[0].vec))] # create the new cluster newcluster=bicluster(mergevec,left=clust[lowestpair[0]], right=clust[lowestpair[1]], distance=closest,id=currentclustid) # cluster ids that weren't in the original set are negative currentclustid-=1 del clust[lowestpair[1]] del clust[lowestpair[0]] clust.append(newcluster) return clust[0] def kcluster(rows,distance=pearson,k=4): # Determine the minimum and maximum values for each point ranges=[(min([row[i] for row in rows]),max([row[i] for row in rows])) for i in range(len(rows[0]))] # Create k randomly placed centroids clusters=[[random.random( )*(ranges[i][1]-ranges[i][0])+ranges[i][0] for i in range(len(rows[0]))] for j in range(k)] lastmatches=None for t in range(100): print 'Iteration %d' % t bestmatches=[[] for i in range(k)] # Find which centroid is the closest for each row for j in range(len(rows)): row=rows[j] bestmatch=0 for i in range(k): d=distance(clusters[i],row) if d<distance(clusters[bestmatch],row): bestmatch=i bestmatches[bestmatch].append(j) # If the results are the same as last time, this is complete if bestmatches==lastmatches: break lastmatches=bestmatches # Move the centroids to the average of their members for i in range(k): avgs=[0.0]*len(rows[0]) if len(bestmatches[i])>0: for rowid in bestmatches[i]: for m in range(len(rows[rowid])): avgs[m]+=rows[rowid][m] for j in 
range(len(avgs)): avgs[j]/=len(bestmatches[i]) clusters[i]=avgs return bestmatches
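
    To get the book's two interactive lines as ordinary script output, call kcluster and then loop over the clusters it returns, printing the row names in each one. A sketch that assumes the readfile and kcluster functions from the excerpt above, written in the same Python 2 style (the data file name is illustrative, and note that the excerpt as shown also needs an import random for kcluster to run):

        # Sketch: the same two REPL lines, as plain print statements inside the script.
        rownames, colnames, data = readfile('blogdata.txt')   # illustrative input file
        kclust = kcluster(data, k=10)                         # list of k lists of row indices

        for n, cluster in enumerate(kclust):
            print 'Cluster %d:' % n
            print [rownames[r] for r in cluster]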

    Read the article

  • Views performance in MySQL

    - by Gianluca Bargelli
    I am currently writing my first PHP application and I would like to know how to project/design/implement MySQL views properly. In my particular case, user data is spread across several tables (as a consequence of database normalization) and I was thinking of using a view to group the data into one large table:

        CREATE VIEW `Users_Merged` ( name, surname, email, phone, role ) AS
        ( SELECT name, surname, email, phone, 'Customer' FROM `Customer` )
        UNION
        ( SELECT name, surname, email, tel, 'Admin' FROM `Administrator` )
        UNION
        ( SELECT name, surname, email, tel, 'Manager' FROM `manager` );

    This way I can use the view's data from the PHP app easily, but I don't really know how much this can affect performance. For example:

        SELECT * from `Users_Merged` WHERE role = 'Admin';

    Is this the right way to filter the view's data, or should I filter BEFORE creating the view itself? (I need this to have a list of users and the functionality to filter them by role.)

    Read the article

  • R: Why does read.table stop reading a file?

    - by Mike Dewar
    I have a file, called genes.txt, which I'd like to become a data.frame. It's got a lot of lines, each line has three, tab delimited fields: mike$ wc -l genes.txt 42476 genes.txt I'd like to read this file into a data.frame in R. I use the command read.table, like this: genes = read.table( genes_file, sep="\t", na.strings="-", fill=TRUE, col.names=c("GeneSymbol","synonyms","description") ) Which seems to work fine, where genes_file points at genes.txt. However, the number of lines in my data.frame is significantly less than the number of lines in my text file: > nrow(genes) [1] 27896 and things I can find in the text file: mike$ grep "SELL" genes.txt SELL CD62L|LAM1|LECAM1|LEU8|LNHR|LSEL|LYAM1|PLNHR|TQ1 selectin L don't seem to be in the data.frame > grep("SELL",genes$GeneSymbol) integer(0) it turns out that genes = read.delim( genes_file, header=FALSE, na.strings="-", fill=TRUE, col.names=c("GeneSymbol","synonyms","description"), ) works just fine. Why does read.delim work when read.table not? If it's of use, you can recreate genes.txt using the following commands which you should run from a command line curl -O ftp://ftp.ncbi.nlm.nih.gov/gene/DATA/gene_info.gz gzip -cd gene_info.gz | awk -Ft '$1==9606{print $3 "\t" $5 "\t" $9}' > genes.txt be warned, though, that gene_info.gz is 101MBish.

    Read the article

  • Testing ASP.NET security in Firefox

    - by blahblah
    I'm not sure whether this question belongs on StackOverflow or SuperUser, but here goes nothing... I'm trying to test out some basic security problems on my personal ASP.NET website to see exactly how the custom validators, etc. work when tampering with the data. I've been looking at the Firefox extension TamperData, which seems to do the trick, but it doesn't feel very professional at all. The issue I'm having with TamperData is that the textbox for the POST data is way too small to hold the ASP.NET view-state, so I have to copy that data into Emacs and then back again to be productive at all. I also don't like that there doesn't seem to be an option to only tamper with data going to/from localhost. Any ideas on better extensions for the task, or better methods to test it?
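
    An alternative to fighting the TamperData textbox is to replay the POST from a small script, where the view-state is just another field that can be edited freely. A sketch using Python's standard library; the URL and field names are placeholders, not anything from the asker's site:

        # Sketch: replay a form POST with tampered field values (placeholder URL and fields).
        from urllib import parse, request

        fields = {
            "__VIEWSTATE": "...captured view-state string...",
            "TextBox1": "a value the validators should reject",
        }
        body = parse.urlencode(fields).encode("ascii")

        req = request.Request("http://localhost/page.aspx", data=body, method="POST")
        with request.urlopen(req) as resp:
            print(resp.status, resp.read()[:200])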

    Read the article

  • Server crash: How does a TCP/IP connection (and the client browser) behave after this?

    - by jens
    Hello experts, I would be thankful for an explanation of what happens to HTTP (TCP/IP) transmissions when the server crashes unexpectedly, and how the client browser (Firefox / IE) handles this event. What happens in the following two standard cases?

    Client actively sends data: The TCP/IP connection has been established and the client (web browser) is sending a POST request with some data when, in the middle of sending, the server crashes. What does this mean for the client? As far as I know TCP/IP does not "acknowledge" a sent data package, so the client does not know that the server crashed. How will the client behave (Firefox and Internet Explorer)?

    The server is actively sending data: As above, the TCP/IP connection has been established and the server is sending a large website to the client (browser). In the middle of the sending process the server crashes, so no further packets are sent. How does the client browser react to this event (Firefox and Internet Explorer)? Thank you very much!! Jens
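
    What the client observes in both cases is a socket-level failure, a reset, an unexpected end of stream, or a timeout, which the browser typically surfaces as a connection-error page. The raw behaviour, stripped of the browser, sketched with a plain Python socket (host, port and request are placeholders):

        # Sketch: how a client-side socket experiences a peer that dies mid-exchange.
        import socket

        s = socket.create_connection(("localhost", 8080), timeout=10)   # placeholder peer
        try:
            s.sendall(b"POST /form HTTP/1.1\r\nHost: localhost\r\n\r\n")
            reply = s.recv(4096)          # b"" means the peer closed before answering
            if not reply:
                print("connection closed before any response arrived")
        except (ConnectionResetError, socket.timeout) as err:
            print("send/receive failed:", err)    # this is all the browser has to go on
        finally:
            s.close()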

    Read the article

  • Why can't I build this Javascript object?

    - by Alex Mcp
    I have an object I'm trying to populate from another object (that is, iterate over a return object to produce an object with only selected values from the original). My code looks like this: var collect = {}; function getHistoricalData(username){ $.getJSON("http://url/" + username + ".json?params", function(data){ for (var i=0; i < data.length; i++) { console.log(i); collect = { i : {text : data[i].text}}; $("#wrap").append("<span>" + data[i].text + "</span><br />"); }; console.log(collect); }); } So I'm using Firebug for debugging, and here's what I know: The JSON object is intact console.log(i); is showing the numbers 1-20 as expected When I log the collect object at the end, it's structure is this: var collect = { i : {text : "the last iteration's text"}}; So the incrementer is "applying" to the data[i].text and returning the text value, but it's not doing what I expected, which is create a new member of the collect object; it's just overwriting collect.i 20 times and leaving me with the last value. Is there a different syntax I need to be using for assigning object members? I tried collect.i.text = and collect[i].text = and the error was that whatever I tried was undefined. I'd love to know what's going on here, so the more in-depth an explanation the better. Thanks!
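
    There are two separate problems in that loop: the container is re-created on every pass, and the literal key i is used instead of the loop index, which is why only the last iteration's text survives. The fixed shape is collect[i] = {...}; here is the same pattern sketched in Python with stand-in data (the JavaScript fix has the same form):

        # Sketch: accumulate one entry per iteration instead of overwriting the container.
        data = [{"text": "first"}, {"text": "second"}, {"text": "third"}]   # stand-in for the JSON

        collect = {}
        for i, item in enumerate(data):
            collect[i] = {"text": item["text"]}   # computed key, so nothing is overwritten

        print(collect)   # {0: {'text': 'first'}, 1: {'text': 'second'}, 2: {'text': 'third'}}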

    Read the article

  • How can I append text to the start and end of a file as header and footer?

    - by Morano88
    I'm filling an XML document manually using C# and I need to have <data> as a header and </data> as a footer for the whole XML file. Is there an easy way to do that? I know it can be done, but I couldn't find a way to do it. Keep in mind that I'm updating the entries, so I need to make sure that they always come between the header and the footer. Thanks. Example:

        <data>
        Entry
        New Entry 1
        New Entry 2
        </data>
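
    One common way to guarantee the entries always sit between the header and footer is simply to rewrite the whole file on every update: write <data>, then the current entries, then </data>. A sketch of that idea in Python (the asker is working in C#; the file name and entry list are made up):

        # Sketch: rewrite the file so <data> ... </data> always wraps the current entries.
        entries = ["Entry", "New Entry 1", "New Entry 2"]   # whatever is current at update time

        with open("output.xml", "w", encoding="utf-8") as f:
            f.write("<data>\n")
            for entry in entries:
                f.write("  " + entry + "\n")
            f.write("</data>\n")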

    Read the article

  • What MS technology to use for HTTP service returning XML?

    - by Borek
    I need to create a service that: accepts HTTP requests (with query string or HTTP POST parameters) does some processing on the requests (checking if the request is valid, authentication etc.) reads data from a custom store (another HTTP call in our case) returns the result as custom XML (defined with XSD) I'm trying to think of various MS technologies that could help me and how good they would be for this scenario (pretty standard one I guess). The tasks above are relatively separate, this is what comes to mind: HTTP front-end: ASP.NET Web Forms ASP.NET MVC (seems more appropriate here as I won't need server controls, view state etc.) WCF? Don't know much about it or how well it would suit my task. Custom logic on the server: this will probably be a generic C# code in all cases (sometimes "plugged into" or called from MVC controllers or some equivalent place in other technologies) Reading data from internal data stores: As said, this is another HTTP server in our case. Options that come to mind: Just read the data using something like WebClient (Just theoretically) implement a LINQ provider (Just even more theoretically) implement an EF provider Output the data as custom XML: Linq2XML Serialization? Is it flexible enough? Does WCF provide some tools for this? Some "OXM" - Object/XML mapper if there is something like that for .NET I may be wrong in many of my assumptions, this is just a quick list that comes to mind after a quick research. Some general notes / questions: Testing is important Solution with a clear domain model would be much preferred over the one without Can Entity Framework actually help somewhere in my scenario? If so, where and how? Would WCF be an appropriate technology for this? I don't know much about it.

    Read the article

  • Sanitizing user input that will later be e-mailed - what should I be worried about?

    - by Kevin Burke
    I'm interning for an NGO in India (Seva Mandir, http://sevamandir.org) and trying to fix their broken "subscribe to newsletter" box. Because the staff isn't very sophisticated and our web host isn't great, I decided to send the relevant data to the publications person via mail() instead of storing it in a MySQL database. I know that it's best to treat user input as malicious, and I've searched the SO forums for posts relevant to escaping user data for sending in a mail message. I know the data should be escaped; what should I be worried about and what's the best way to sanitize the input before emailing it? Form flow: 1. User enters email on homepage and clicks Submit 2. User enters name, address, more information on second page (bad usability, I know, but my boss asked me to) and clicks "Submit" 3. Collect the data via $_POST and email it to the publications editor (and possibly send a confirmation to the subscriber). I am going to sanitize the email in step 2 and the other data in step 3. I appreciate your help, Kevin
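
    The main thing to worry about when user input ends up in mail() is header injection: CR/LF sequences smuggled into any field that becomes part of a mail header (the subscriber's address, a subject, a From line), which lets a bot append its own headers and recipients. The checks sketched in Python; the PHP version is the same idea (reject or strip newlines in header-bound fields, sanity-check the address, and treat everything else as body text). Function names and the sample address are invented.

        # Sketch: refuse newlines in header-bound fields and loosely validate the address.
        import re

        def clean_header_value(value):
            """User input that will appear in a mail header must not contain CR or LF."""
            if "\r" in value or "\n" in value:
                raise ValueError("possible header injection attempt")
            return value.strip()

        def looks_like_email(address):
            # Deliberately loose shape check: something@something.something
            return re.match(r"^[^@\s]+@[^@\s]+\.[^@\s]+$", address) is not None

        email = clean_header_value("reader@example.org")
        assert looks_like_email(email)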

    Read the article

  • Problem in loading cities using JSON

    - by Saravanan I M
    I am using Geonames for loading the cities using JSON. Geonames data i imported into my database. I am using MS SQL 2008 Server. I display a dropdown for country select. Once the user select the country. I am loading all the cities into the autocomplete textbox. I am facing a delay in the getJSON method. Also it seems like executing asynchronous. So before getting all the data my autocompelte is getting filled. Below is my complete script. I think i have some problem in the loop. Please advice me what i am doing wrong in my code. $(document).ready(function () { $("#ShowLoad").hide(); //Hook onto the MakeID list's onchange event $("#Country").change(function () { findcities = []; $("#ShowLoad").show(); $("#HomeTown").unautocomplete(); var url = '<%= Url.Content("~/") %>' + "Location/GetCitiesCountByCountry/" + $("#Country").val(); $.getJSON(url, null, function (data) { var total = data; if (total > 0) { var pageTotal = Math.ceil(total / 1000); var isFilled = false; for (var i = 0; i < pageTotal; i++) { var skip = i == 1 ? 0 : (i * 1000) - 1000; var url = '<%= Url.Content("~/") %>' + "Location/GetCitiesByCountry/" + $("#Country").val() + "?skip=" + skip; //alert(i); $.getJSON(url, null, function (data) { $.each(data.Cities, function (index, optionData) { if ($("#Country").val() == "US") { findcities.push(optionData.asciiname + "," + optionData.admin1_code); } else { findcities.push(optionData.asciiname); } }); if (i == pageTotal) { //alert(findcities); $("#ShowLoad").hide(); $("#HomeTown").focus().autocomplete(findcities); } }); } $("#HomeTown").setOptions({ max: 100000 }); } }); }).change(); });

    Read the article

  • Can I create a custom class that inherits from a strongly typed DataRow?

    - by Calvin Fisher
    I'm working on a huge, old project with a lot of brittle code, some of which has been around since the .NET 1.0 era, and it has been and will be worked on by other people... so I'd like to change as little as possible. I have one project in my solution that contains DataSet.xsd. This project compiles to a separate assembly (Data.dll). The database schema includes several tables arranged more or less hierarchically, but the only way the tables are actually linked together is through joins. I can get, e.g. DepartmentRow and EmployeeRow objects from the autogenerated code. EmployeeRow contains information from the employee's corresponding DepartmentRow through a join. I'm making a new report to view multiple departments and all their employees. If I use the existing data access scheme, all I will be able to get is a spreadsheet-like output where each employee is represented on one line, with department information repeated over and over in its appropriate columns. E.g.: Department1...Employee1... Department1...Employee2... Department2...Employee3... But what the customer would like is to have each department render like a heading, with a list of employees beneath each. E.g.: - Department1... Employee1... Employee2... + Department2... I'm trying to do this by inheriting hierarchical objects from the autogenerated Row objects. E.g.: public class Department : DataSet.DepartmentRow { public List<Employee> Employees; } That way I could nest the data in the report by using a collection of Department objects as the DataSource, each of which will put its list of Employees in a subreport. The problem is that this gives me a The type Data.DataSet.DepartmentRow has no constructors defined error. And when I try to make a constructor, e.g. public class Department : DataSet.DepartmentRow { private Department() { } public List<Employee> Employees; } I get a 'Data.DataSet.DepartmentRow(System.Data.DataRowBuilder)' is inaccessible due to its protection level. error in addition to the first one. Is there a way to accomplish what I'm trying to do? Or is there something else I should be trying entirely?

    Read the article

  • Strategy Pattern with Type Reflection affecting Performance?

    - by Aurélien Ribon
    Hello! I am building graphs. A graph consists of nodes linked to each other with links (indeed, my dear). In order to assign a given behavior to each node, I implemented the strategy pattern:

        class Node {
            public BaseNodeBehavior Behavior {get; set;}
        }

    As a result, in many parts of the application, I am extensively using type reflection to know which behavior a node has:

        if (node.Behavior is NodeDataOutputBehavior)
            workOnOutputNode(node)
        ....

    My graph can get thousands of nodes. Is type reflection greatly affecting performance? Should I use something other than the strategy pattern? I'm using strategy because I need behavior inheritance. For example, basically, a behavior can be Data or Operator, a Data behavior can be IO, Const or Intermediate, and finally an IO behavior can be Input or Output. So if I use an enumeration, I won't be able to test for a node behavior to be of Data kind; I will need to test it to be [Input, Output, Const or Intermediate]. And if later I want to add another behavior of Data kind, I'm screwed: every data-testing method will need to be changed.
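
    The cost concern is worth separating from the design concern: an is check is a cheap runtime type test rather than full reflection, but the scattered conditionals are what make the behaviour hierarchy hard to extend. One way to keep the hierarchy while removing the call-site type tests is to let the behaviour classes answer the question themselves; a sketch of that shape in Python (the class names follow the question, the rest is invented):

        # Sketch: the behaviour hierarchy answers "what kind am I?" instead of callers type-testing.
        class BaseNodeBehavior:
            is_data = False

        class DataBehavior(BaseNodeBehavior):
            is_data = True

        class OutputDataBehavior(DataBehavior):
            pass

        class OperatorBehavior(BaseNodeBehavior):
            pass

        class Node:
            def __init__(self, behavior):
                self.behavior = behavior

        nodes = [Node(OutputDataBehavior()), Node(OperatorBehavior())]
        data_nodes = [n for n in nodes if n.behavior.is_data]   # no isinstance chain at the call site
        print(len(data_nodes))   # 1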

    Read the article

  • I cannot connect to SQL Server Express using VB.NET

    - by konin
    Can someone tell me what I am missing? I am using this connection string to connect to my database and still it won't connect:

        Dim str As String = "Provider = .NET Framework Data Provider for SQL Server; Data Source=C:\Users\konin\Documents\UHMS\bin\Debug\UHMS.mdf;Integrated Security=True;Connect Timeout=30;User Instance=True"

    This is the process I used to get the data source: right-click the database, select Properties, and click Select Data Source. I hope I am clear enough. Thanks for reading. Edit: the error message is as follows: "unable to connect to database please contact administrator".

    Read the article

  • nginx reverse proxy cannot access apache virtual hosts

    - by Sc0rian
    I am setting up nginx as a reverse proxy. The server runs on directadmin and lamp stack. I have nginx running on port 81. I can access all my sites (including virtual ips) on the port 81. However when I forward the traffic from port 80 to 81, the virtual ips have a message saying "Apache is running normally". Server IPs are fine, and I can still access virtual IP's on 81. [root@~]# netstat -an | grep LISTEN | egrep ":80|:81" tcp 0 0 <virtual ip>:81 0.0.0.0:* LISTEN tcp 0 0 <virtual ip>:81 0.0.0.0:* LISTEN tcp 0 0 <serverip>:81 0.0.0.0:* LISTEN tcp 0 0 :::80 :::* LISTEN apache 24090 0.6 1.3 29252 13612 ? S 18:34 0:00 /usr/sbin/httpd -k start -DSSL apache 24092 0.9 2.1 39584 22056 ? S 18:34 0:00 /usr/sbin/httpd -k start -DSSL apache 24096 0.2 1.9 35892 20256 ? S 18:34 0:00 /usr/sbin/httpd -k start -DSSL apache 24120 0.3 1.7 35752 17840 ? S 18:34 0:00 /usr/sbin/httpd -k start -DSSL apache 24495 0.0 1.4 30892 14756 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL apache 24496 1.0 2.1 39892 22164 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL apache 24516 1.5 3.6 55496 38040 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL apache 24519 0.1 1.2 28996 13224 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL apache 24521 2.7 4.0 58244 41984 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL apache 24522 0.0 1.2 29124 12672 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL apache 24524 0.0 1.1 28740 12364 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL apache 24535 1.1 1.7 36008 17876 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL apache 24536 0.0 1.1 28592 12084 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL apache 24537 0.0 1.1 28592 12112 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL apache 24539 0.0 0.0 0 0 ? Z 18:35 0:00 [httpd] <defunct> apache 24540 0.0 1.1 28592 11540 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL apache 24541 0.0 1.1 28592 11548 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL root 24548 0.0 0.0 4132 752 pts/0 R+ 18:35 0:00 egrep apache|nginx root 28238 0.0 0.0 19576 284 ? Ss May29 0:00 nginx: master process /usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/nginx.conf apache 28239 0.0 0.0 19888 804 ? S May29 0:00 nginx: worker process apache 28240 0.0 0.0 19888 548 ? S May29 0:00 nginx: worker process apache 28241 0.0 0.0 19736 484 ? S May29 0:00 nginx: cache manager process here is my nginx conf: cat /usr/local/nginx/conf/nginx.conf user apache apache; worker_processes 2; # Set it according to what your CPU have. 
4 Cores = 4 worker_rlimit_nofile 8192; pid /var/run/nginx.pid; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] ' '"$request" $status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; server_tokens off; access_log /var/log/nginx_access.log main; error_log /var/log/nginx_error.log debug; server_names_hash_bucket_size 64; sendfile on; tcp_nopush on; tcp_nodelay off; keepalive_timeout 30; gzip on; gzip_comp_level 9; gzip_proxied any; proxy_buffering on; proxy_cache_path /usr/local/nginx/proxy_temp levels=1:2 keys_zone=one:15m inactive=7d max_size=1000m; proxy_buffer_size 16k; proxy_buffers 100 8k; proxy_connect_timeout 60; proxy_send_timeout 60; proxy_read_timeout 60; server { listen <server ip>:81 default rcvbuf=8192 sndbuf=16384 backlog=32000; # Real IP here server_name <server host name> _; # "_" is for handle all hosts that are not described by server_name charset off; access_log /var/log/nginx_host_general.access.log main; location / { proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_pass http://<server ip>; # Real IP here client_max_body_size 16m; client_body_buffer_size 128k; proxy_buffering on; proxy_connect_timeout 90; proxy_send_timeout 90; proxy_read_timeout 120; proxy_buffer_size 16k; proxy_buffers 32 32k; proxy_busy_buffers_size 64k; proxy_temp_file_write_size 64k; } location /nginx_status { stub_status on; access_log off; allow 127.0.0.1; deny all; } } include /usr/local/nginx/vhosts/*.conf; } here is my vhost conf: # cat /usr/local/nginx/vhosts/1.conf server { listen <virt ip>:81 default rcvbuf=8192 sndbuf=16384 backlog=32000; # Real IP here server_name <virt domain name>.com ; # "_" is for handle all hosts that are not described by server_name charset off; access_log /var/log/nginx_host_general.access.log main; location / { proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_pass http://<virt ip>; # Real IP here client_max_body_size 16m; client_body_buffer_size 128k; proxy_buffering on; proxy_connect_timeout 90; proxy_send_timeout 90; proxy_read_timeout 120; proxy_buffer_size 16k; proxy_buffers 32 32k; proxy_busy_buffers_size 64k; proxy_temp_file_write_size 64k; } } Apache config: <VirtualHost xxxxxx:80 > ServerName www.<domain>.com ServerAlias www.<domain>.com <domain>.com ServerAdmin webmaster@<domain>.com DocumentRoot /home/<domain>/domains/<domain>.com/public_html ScriptAlias /cgi-bin/ /home/<domain>/domains/<domain>.com/public_html/cgi-bin/ UseCanonicalName OFF <IfModule !mod_ruid2.c> SuexecUserGroup <domain> <domain> </IfModule> <IfModule mod_ruid2.c> RMode config RUidGid <domain> <domain> RGroups apache access </IfModule> CustomLog /var/log/httpd/domains/<domain>.com.bytes bytes CustomLog /var/log/httpd/domains/<domain>.com.log combined ErrorLog /var/log/httpd/domains/<domain>.com.error.log <Directory /home/<domain>/domains/<domain>.com/public_html> Options +Includes -Indexes php_admin_flag engine ON php_admin_value sendmail_path '/usr/sbin/sendmail -t -i -f <domain>@<domain>.com' </Directory> <virtual ip address>:80 is a NameVirtualHost default server www.xx.com (/usr/local/directadmin/data/users/xx/httpd.conf:16) port 80 namevhost www.xx.com (/usr/local/directadmin/data/users/xx/httpd.conf:16) port 80 namevhost www.xx.co.uk 
(/usr/local/directadmin/data/users/xx/httpd.conf:107) port 80 namevhost www.xx.co.uk (/usr/local/directadmin/data/users/xx/httpd.conf:151) port 80 namevhost www.xx.co.uk (/usr/local/directadmin/data/users/xx/httpd.conf:195) <virtual ip address>:443 is a NameVirtualHost default server www.xx.com (/usr/local/directadmin/data/users/xx/httpd.conf:61) port 443 namevhost www.xx.com (/usr/local/directadmin/data/users/xx/httpd.conf:61) <server ip>:80 is a NameVirtualHost default server localhost (/etc/httpd/conf/extra/httpd-vhosts.conf:29) port 80 namevhost localhost (/etc/httpd/conf/extra/httpd-vhosts.conf:29) port 80 namevhost www.xx.co.uk (/usr/local/directadmin/data/users/admin/httpd.conf:16)

    Read the article

  • What are people's opinions vis-a-vis my choice of authorization plugins?

    - by brad
    I'm slowly but surely putting together my first rails app (first web-app of any kind in fact - I'm not really a programmer) and it's time to set up a user registration/login system. The nature of my app is such that each user will be completely separated from each other user (except for admin roles). When users log in they will have their own unique index page looking at only their data which they and no-one else can ever see or edit. However, I may later want to add a role for a user to be able to view and edit several other user's data (e.g. a group of users may want to allow their secretary to access and edit their data but their secretary would not need any data of their own). My plan is to use authlogic to create the login system and declarative authorization to control permissions but before I embark on this fairly major and crucial task I thought I would canvas a few opinions as to whether this combo was appropriate for the tasks I envisage or whether there would be a better/simpler/faster/cheaper/awesomer option.

    Read the article

  • Error in glmmadmb(.....) The function maximizer failed (couldn't find STD file)

    - by Joe King
    This works fine: fit.mc1 <-MCMCglmm(bull~1,random=~school,data=dt1,family="categorical", prior=list(R=list(V=1, fix=1), G=list(G1=list(V=1, nu=0))), slice=T) So does this: fit.glmer <- glmer(bull~(1|school),data=dt1,family=binomial) But now I am trying to work with the package glmmadmb and this does not work: fit.mc12 <- glmmadmb(bull~1+(1|school), data=dt1, family="binomial", mcmc=TRUE, mcmc.opts=mcmcControl(mcmc=50000)) It generates the error: Error in glmmadmb(bull~ 1 + (1 | school), data = dt1, family = "binomial", : The function maximizer failed (couldn't find STD file) In addition: Warning message: running command '<snip>\cmd.exe <snip>\glmmadmb.exe" -maxfn 500 -maxph 5 -noinit -shess -mcmc 5000 -mcsave 5 -mcmult 1' had status 1

    Read the article
