Search Results

Search found 10693 results on 428 pages for 'max requests'.

  • Proc causing a random TypeError

    - by go____yourself
    I'm refactoring some code and this proc is causing an error randomly and I don't know why or how to debug it... Any ideas?

    New code with the proc:

        defense_moves, offense_moves = [], []
        determine_move = ->move,side,i { side << move.count(move[i]) }
        defense.size.times { |i| determine_move.(defense, defense_moves, i) }
        offense.size.times { |i| determine_move.(offense, offense_moves, i) }
        dm = defense[defense_moves.index(defense_moves.max)].nil? ? [0] : defense[defense_moves.index(defense_moves.max)]
        om = offense[offense_moves.index(offense_moves.max)].nil? ? [0] : offense[offense_moves.index(offense_moves.max)]

    Original code:

        d = 0
        defense_moves = []
        loop do
          defense_moves << defense.count(defense[d])
          break if defense.count(defense[d]).zero?
          d += 1
        end

        o = 0
        offense_moves = []
        loop do
          offense_moves << offense.count(offense[o])
          break if offense.count(offense[o]).zero?
          o += 1
        end

        dm = defense[defense_moves.index(defense_moves.max)].nil? ? [0] : defense[defense_moves.index(defense_moves.max)]
        om = offense[offense_moves.index(offense_moves.max)].nil? ? [0] : offense[offense_moves.index(offense_moves.max)]

    TypeError:

        ttt2.rb:95:in `[]': no implicit conversion from nil to integer (TypeError)
            from ttt2.rb:95:in `computer_make_move'
            from ttt2.rb:133:in `draw_board'
            from ttt2.rb:24:in `place'
            from ttt2.rb:209:in `block in start_new_game'
            from ttt2.rb:188:in `loop'
            from ttt2.rb:188:in `start_new_game'
            from ttt2.rb:199:in `block in start_new_game'
            from ttt2.rb:188:in `loop'
            from ttt2.rb:188:in `start_new_game'
            from ttt2.rb:199:in `block in start_new_game'
            from ttt2.rb:188:in `loop'
            from ttt2.rb:188:in `start_new_game'
            from ttt2.rb:199:in `block in start_new_game'
            from ttt2.rb:188:in `loop'
            from ttt2.rb:188:in `start_new_game'
            from ttt2.rb:199:in `block in start_new_game'
            from ttt2.rb:188:in `loop'
            from ttt2.rb:188:in `start_new_game'
            from ttt2.rb:234:in `<main>'
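
    The crash is consistent with defense (or offense) being empty when the lookup runs: Array#index returns nil when max is nil, and Array#[] raises exactly this TypeError on a nil index before the .nil? ternary can fire. A minimal reduction, with hypothetical data:

        # Sketch: how a nil index slips past the guard
        defense_moves = []                            # nothing was counted
        idx = defense_moves.index(defense_moves.max)  # index(nil) => nil
        # defense[idx]  # TypeError: no implicit conversion from nil into Integer

        # One defensive rewrite: resolve the index once, then branch on it
        dm = idx.nil? ? [0] : defense[idx]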

  • calculate AUC (GAM) in R

    - by ahmad
    I used the following script to calculate AUC in R:

        library(mgcv)
        library(ROCR)
        library(AUC)
        data1 = read.table("d:\\2005.txt", header=T)
        GAM <- gam(tuna ~ s(chla)+s(sst)+s(ssha), family=binomial, data=data1)
        gampred <- predict(GAM, type="response")
        rp <- prediction(gampred, data1$tuna)
        auc <- performance(rp, "auc")@y.values[[1]]
        auc
        roc <- performance(rp, "tpr", "fpr")
        plot(roc)

    But when I run the script, the result is:

        > rp <- prediction(gampred, data1$tuna)
        Error in prediction(gampred, data1$tuna) :
          Format of predictions is invalid.
        > auc <- performance(rp, "auc")@y.values[[1]]
        Error in performance(rp, "auc") : object 'rp' not found
        > auc
        function (x, min = 0, max = 1)
        {
            if (any(class(x) == "roc")) {
                if (min != 0 || max != 1) {
                    x$fpr <- x$fpr[x$cutoffs >= min & x$cutoffs <= max]
                    x$tpr <- x$tpr[x$cutoffs >= min & x$cutoffs <= max]
                }
                ans <- 0
                for (i in 2:length(x$fpr)) {
                    ans <- ans + 0.5 * abs(x$fpr[i] - x$fpr[i - 1]) * (x$tpr[i] + x$tpr[i - 1])
                }
            }
            else if (any(class(x) %in% c("accuracy", "sensitivity", "specificity"))) {
                if (min != 0 || max != 1) {
                    x$cutoffs <- x$cutoffs[x$cutoffs >= min & x$cutoffs <= max]
                    x$measure <- x$measure[x$cutoffs >= min & x$cutoffs <= max]
                }
                ans <- 0
                for (i in 2:(length(x$cutoffs))) {
                    ans <- ans + 0.5 * abs(x$cutoffs[i - 1] - x$cutoffs[i]) * (x$measure[i] + x$measure[i - 1])
                }
            }
            return(as.numeric(ans))
        }
        <bytecode: 0x03012f10>
        <environment: namespace:AUC>
        > roc <- performance(rp, "tpr", "fpr")
        Error in performance(rp, "tpr", "fpr") : object 'rp' not found
        > plot(roc)
        Error in levels(labels) : argument "labels" is missing, with no default

    Can anybody help me to solve this problem? Thank you in advance.
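
    A first thing to try (an assumption, not a verified fix): ROCR's prediction() is picky about input format, and predict() on a gam model returns a named array, so coercing both arguments to plain vectors is a cheap experiment. The later errors are knock-on effects: rp was never created, and typing auc printed the AUC package's own auc() function, since AUC and ROCR export clashing names. The sketch below therefore loads ROCR only:

        # Sketch: plain-vector inputs, and no AUC package masking ROCR
        library(mgcv)
        library(ROCR)
        gampred <- predict(GAM, type = "response")
        rp  <- prediction(as.numeric(gampred), as.numeric(data1$tuna))
        auc <- performance(rp, "auc")@y.values[[1]]
        roc <- performance(rp, "tpr", "fpr")
        plot(roc)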

  • Why does redis report limit of 1024 files even after update to limits.conf?

    - by esilver
    I see this error at the top of my redis.log file:

        Current maximum open files is 1024. maxclients has been reduced to 4064 to compensate for low ulimit.

    I have followed these steps to the letter (and rebooted):

    Moreover, I see this when I run ulimit:

        ubuntu@ip-XX-XXX-XXX-XXX:~$ ulimit -n
        65535

    Is this error specious? If not, what other steps do I need to perform? I am running redis 2.8.13 (tip of the tree) on Ubuntu LTS 14.04.1 (again, tip of the tree). Here is the user info:

        ubuntu@ip-XX-XXX-XXX-XXX:~$ ps aux | grep redis
        root    1027  0.0  0.0   66328    2112 ?  Ss  20:30  0:00 sudo -u ubuntu /usr/local/bin/redis-server /etc/redis/redis.conf
        ubuntu  1107 19.2 48.8 7629152 7531552 ?  Sl  20:30  2:21 /usr/local/bin/redis-server *:6379

    The server is therefore running as ubuntu. Here is my limits.conf file without comments:

        ubuntu@ip-XX-XXX-XXX-XXX:~$ cat /etc/security/limits.conf | sed '/^#/d;/^$/d'
        ubuntu soft nofile 65535
        ubuntu hard nofile 65535
        root soft nofile 65535
        root hard nofile 65535

    And here is the output of sysctl fs.file-max:

        ubuntu@ip-XX-XXX-XXX-XXX:~$ sysctl -a | grep fs.file-max
        sysctl: permission denied on key 'fs.protected_hardlinks'
        sysctl: permission denied on key 'fs.protected_symlinks'
        fs.file-max = 1528687
        sysctl: permission denied on key 'kernel.cad_pid'
        sysctl: permission denied on key 'kernel.usermodehelper.bset'
        sysctl: permission denied on key 'kernel.usermodehelper.inheritable'
        sysctl: permission denied on key 'net.ipv4.tcp_fastopen_key'

    and as sudo:

        ubuntu@ip-10-102-154-226:~$ sudo sysctl -a | grep fs.file-max
        fs.file-max = 1528687

    Also, I see this error at the top of the redis.log file; I'm not sure if it's related. It makes sense that the ubuntu user isn't allowed to change max open files, but given the high ulimits I have tried to set, he shouldn't need to:

        [1050] 23 Aug 21:00:43.572 # You requested maxclients of 10000 requiring at least 10032 max file descriptors.
        [1050] 23 Aug 21:00:43.572 # Redis can't set maximum open files to 10032 because of OS error: Operation not permitted.
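
    A plausible explanation, hedged since the boot configuration isn't shown: /etc/security/limits.conf is applied by PAM at login time, so it raises the limit for interactive shells (hence ulimit -n reporting 65535) but not for a daemon spawned by init at boot, which still inherits the 1024 default. On Ubuntu 14.04 an upstart job can raise the limit itself; a sketch with a hypothetical job file:

        # /etc/init/redis-server.conf -- hypothetical upstart job
        description "redis-server"
        start on runlevel [2345]
        stop on runlevel [016]
        limit nofile 65535 65535   # applied directly to the spawned process
        exec sudo -u ubuntu /usr/local/bin/redis-server /etc/redis/redis.conf

    Alternatively, a `ulimit -n 65535` line in whatever init script currently launches redis-server should have the same effect.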

  • Apache https is slow

    - by raucous12
    Hey, I've set apache up to use SSL with a self-signed certificate. With http (KeepAlive off), I can get over 5000 requests per second. However, with https, I can only get 13 requests per second. I know there is supposed to be a bit of an overhead, but this seems abnormal. Can anyone suggest how I might go about debugging this? Here is the ab log for https:

        Server Software:        Apache/2.2.3
        Server Hostname:        127.0.0.1
        Server Port:            443
        SSL/TLS Protocol:       TLSv1/SSLv3,DHE-RSA-AES256-SHA,4096,256
        Document Path:          /hello.html
        Document Length:        29 bytes
        Concurrency Level:      5
        Time taken for tests:   30.49425 seconds
        Complete requests:      411
        Failed requests:        0
        Write errors:           0
        Total transferred:      119601 bytes
        HTML transferred:       11919 bytes
        Requests per second:    13.68 [#/sec] (mean)
        Time per request:       365.565 [ms] (mean)
        Time per request:       73.113 [ms] (mean, across all concurrent requests)
        Transfer rate:          3.86 [Kbytes/sec] received

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:      190  347   74.3    333    716
        Processing:     0   14   24.0      1    166
        Waiting:        0   11   21.6      0    165
        Total:        191  361   80.8    345    716

        Percentage of the requests served within a certain time (ms)
          50%    345
          66%    377
          75%    408
          80%    421
          90%    468
          95%    521
          98%    578
          99%    596
         100%    716 (longest request)
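
    Two hedged observations rather than a diagnosis: the Connect column (mean 347 ms) is dominated by the TLS handshake, the cipher line shows DHE-RSA key exchange with 4096-bit parameters, and with KeepAlive off every request pays that handshake in full. Re-running ab with keep-alive separates handshake cost from bulk-encryption cost (assuming your ab build has SSL support):

        # Sketch: amortize the TLS handshake over many requests per connection
        ab -k -n 1000 -c 5 https://127.0.0.1/hello.html

    If the keep-alive numbers land near the plain-HTTP ones, the per-connection handshake is the bottleneck, and a cheaper key exchange than 4096-bit DHE is the usual lever.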

  • How can I use HAproxy with SSL and get X-Forwarded-For headers AND tell PHP that SSL is in use?

    - by Josh
    I have the following setup:

        (internet) ---> [ pfSense Box     ]  /--> [ Apache / PHP server ]
                        [running HAproxy] --+--> [ Apache / PHP server ]
                                            +--> [ Apache / PHP server ]
                                             \--> [ Apache / PHP server ]

    For HTTP requests this works great: requests are distributed to my Apache servers just fine. For SSL requests, I had HAproxy distributing the requests using TCP load balancing, and it worked; however, since HAproxy didn't act as a proxy, it didn't add the X-Forwarded-For HTTP header, and the Apache / PHP servers didn't know the client's real IP address.

    So, I added stunnel in front of HAproxy, reading that stunnel could add the X-Forwarded-For HTTP header. However, the package which I could install into pfSense does not add this header... also, this apparently kills my ability to use KeepAlive requests, which I would really like to keep. But the biggest issue which killed that idea was that stunnel converted the HTTPS requests into plain HTTP requests, so PHP didn't know that SSL was enabled and tried to redirect to the SSL site.

    How can I use HAproxy to load balance across a number of SSL servers, allowing those servers to both know the client's IP address and know that SSL is in use? And if possible, how can I do it on my pfSense server? Or should I drop all this and just use nginx?
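
    For reference, HAProxy 1.5 and later can terminate SSL itself, which addresses both halves of the question in one place; whether pfSense ships such a build is a separate matter. A sketch, with hypothetical names and paths:

        frontend https-in
            bind *:443 ssl crt /etc/haproxy/site.pem    # requires HAProxy 1.5+
            mode http
            option forwardfor                           # adds X-Forwarded-For
            http-request set-header X-Forwarded-Proto https
            default_backend apache_php

    On the backends, PHP can then key off the X-Forwarded-Proto header (for example, setting $_SERVER['HTTPS'] = 'on' when it reads "https") so framework redirects behave as if SSL terminated locally.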

  • Uploads fail with shorewall enabled

    - by JamesArmes
    I have an Ubuntu 8.04 server with shorewall 4.0.6 installed. When I try to upload files using FTP, SCP, or cURL, the file upload stalls almost immediately and eventually times out. If I turn off shorewall then the uploads work fine. I don't have any rules that specifically allow FTP and I'm not too concerned with it, but I do need to be able to upload via 22 (SCP) and 80 & 443 (cURL). This is what my rules look like:

        COMMENT Allow Server to respond to any web (80) and SSL (443) requests
        ACCEPT net $FW tcp 80
        ACCEPT $FW net tcp 80
        ACCEPT net $FW tcp 443
        ACCEPT $FW net tcp 443

        COMMENT Allow Server to respond to SNMPD (161) requests
        ACCEPT net $FW udp 161

        COMMENT Allow Server to respond to MySQL (3306) requests (for MySQL Graphing)
        ACCEPT net $FW tcp 3306

        COMMENT Allow Server to respond to any SSH connection attempts, and to SSH out.
        SSH/ACCEPT net $FW
        SSH/ACCEPT $FW net

        COMMENT Allow Server to make DNS Requests out.
        DNS/ACCEPT $FW net

        COMMENT Default "close" anything else.
        Ping/REJECT net $FW
        ACCEPT $FW net icmp
        #LAST LINE -- ADD YOUR ENTRIES BEFORE THIS ONE -- DO NOT REMOVE

    I expected the top four ACCEPT lines to allow inbound and outbound traffic over 80 and 443, and I expected the two SSH/ACCEPT lines to allow inbound and outbound traffic over 22, including SCP. Any help is greatly appreciated.

    /etc/shorewall/policy contains the following (all lines above are commented out):

        #
        # Allow all connection requests from the firewall to the internet
        #
        $FW net ACCEPT
        #
        # Policies for traffic originating from the Internet zone (net)
        # Drop (ignore) all connection requests from the Internet to the firewall
        #
        net all DROP info
        # THE FOLLOWING POLICY MUST BE LAST
        # Reject all other connection requests
        all all REJECT info
        #LAST LINE -- ADD YOUR ENTRIES ABOVE THIS LINE -- DO NOT REMOVE
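
    One way to see what is actually being dropped mid-upload (a debugging sketch; it relies on the info log level already set in the policy file above):

        # Watch shorewall's kernel log entries while reproducing a stalled upload
        tail -f /var/log/syslog | grep -i shorewall

    If drops appear for packets that belong to an already-established connection, the usual suspects are connection-tracking/state handling or MSS and fragmentation issues rather than the port rules themselves.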

  • Load a 6 MB binary file in a SQL Server 2005 VARBINARY(MAX) column using ADO/VC++?

    - by Feroz Khan
    How do I load a binary file (.bin) of size 6 MB into a varbinary(MAX) column of a SQL Server 2005 database using ADO in a VC++ application? This is the code I am using to load the file, which I previously used to load a .bmp file:

        BOOL CSaveView::PutECGInDB(CString strFilePath, FieldPtr pFileData)
        {
            // Open file
            CFile fileImage;
            CFileStatus fileStatus;
            fileImage.Open(strFilePath, CFile::modeRead);
            fileImage.GetStatus(fileStatus);

            // Allocate memory for data
            ULONG nBytes = (ULONG)fileStatus.m_size;
            HGLOBAL hGlobal = GlobalAlloc(GPTR, nBytes);
            LPVOID lpData = GlobalLock(hGlobal);

            // Read the file into the buffer
            fileImage.Read(lpData, nBytes);

            HRESULT hr;
            _variant_t varChunk;
            long lngOffset = 0;
            UCHAR chData;
            SAFEARRAY FAR *psa = NULL;
            SAFEARRAYBOUND rgsabound[1];

            try
            {
                // Create a safe array to store the bytes
                rgsabound[0].lLbound = 0;
                rgsabound[0].cElements = nBytes;
                psa = SafeArrayCreate(VT_UI1, 1, rgsabound);
                while (lngOffset < (long)nBytes)
                {
                    chData = ((UCHAR*)lpData)[lngOffset];
                    hr = SafeArrayPutElement(psa, &lngOffset, &chData);
                    if (hr != S_OK)
                    {
                        return false;
                    }
                    lngOffset++;
                }
                lngOffset = 0;

                // Assign the safe array to a variant
                varChunk.vt = VT_ARRAY | VT_UI1;
                varChunk.parray = psa;

                hr = pFileData->AppendChunk(varChunk);
                if (hr != S_OK)
                {
                    return false;
                }
            }
            catch (_com_error &e)
            {
                // Get info from _com_error
                _bstr_t bstrSource(e.Source());
                _bstr_t bstrDescription(e.Description());
                _bstr_t bstrErrorMessage(e.ErrorMessage());
                _bstr_t bstrErrorCode(e.Error());
                TRACE("Exception thrown for classes generated by #import");
                TRACE("\tCode = %08lx\n", (LPCSTR)bstrErrorCode);
                TRACE("\tCode Meaning = %s\n", (LPCSTR)bstrErrorMessage);
                TRACE("\tSource = %s\n", (LPCSTR)bstrSource);
                TRACE("\tDescription = %s\n", (LPCSTR)bstrDescription);
            }
            catch (...)
            {
                TRACE("*** Unhandled exception ***");
            }

            // Free memory
            GlobalUnlock(lpData);
            return true;
        }

    But when I read the same file back using the GetChunk function, it gives me all 0s, although the size of the file I get is the same as the one uploaded. Your help will be highly appreciated.
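
    For the read-back side (the GetChunk code isn't shown), note that GetChunk maintains an internal offset per Field and is normally called in a loop until all bytes are consumed; a sketch, with the buffer handling left illustrative:

        // Sketch: ActualSize and GetChunk are standard ADO Field members;
        // the chunk size and the copy-out step here are hypothetical.
        long lBytesLeft = pFileData->ActualSize;
        while (lBytesLeft > 0)
        {
            long lChunk = min(lBytesLeft, 4096L);
            _variant_t varData = pFileData->GetChunk(lChunk);
            // varData.parray now holds up to lChunk bytes: copy them out
            // (e.g. via SafeArrayAccessData) before fetching the next chunk.
            lBytesLeft -= lChunk;
        }

    It is also worth confirming the record was committed (Update/UpdateBatch called) after AppendChunk and before reading back, since an unflushed field can read back empty.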

  • Asp.net Mvc - Kigg: Maintain User object in HttpContext.Items between requests.

    - by Pickels
    Hello, first I want to say that I hope this doesn't look like I am lazy, but I have some trouble understanding a piece of code from the following project: http://kigg.codeplex.com/

    I was going through the source code and I noticed something that would be useful for my own little project I am making. In their BaseController they have the following code:

        private static readonly Type CurrentUserKey = typeof(IUser);

        public IUser CurrentUser
        {
            get
            {
                if (!string.IsNullOrEmpty(CurrentUserName))
                {
                    IUser user = HttpContext.Items[CurrentUserKey] as IUser;
                    if (user == null)
                    {
                        user = AccountRepository.FindByClaim(CurrentUserName);
                        if (user != null)
                        {
                            HttpContext.Items[CurrentUserKey] = user;
                        }
                    }
                    return user;
                }
                return null;
            }
        }

    This isn't an exact copy of the code; I adjusted it a little to my needs. This part of the code I still understand. They store their IUser in HttpContext.Items. I guess they do it so that they don't have to call the database each time they need the User object. The part that I don't understand is how they maintain this object in between requests. If I understand correctly, HttpContext.Items is a per-request cache storage. So after some more digging I found the following code:

        internal static IDictionary<UnityPerWebRequestLifetimeManager, object> GetInstances(HttpContextBase httpContext)
        {
            IDictionary<UnityPerWebRequestLifetimeManager, object> instances;

            if (httpContext.Items.Contains(Key))
            {
                instances = (IDictionary<UnityPerWebRequestLifetimeManager, object>)httpContext.Items[Key];
            }
            else
            {
                lock (httpContext.Items)
                {
                    if (httpContext.Items.Contains(Key))
                    {
                        instances = (IDictionary<UnityPerWebRequestLifetimeManager, object>)httpContext.Items[Key];
                    }
                    else
                    {
                        instances = new Dictionary<UnityPerWebRequestLifetimeManager, object>();
                        httpContext.Items.Add(Key, instances);
                    }
                }
            }

            return instances;
        }

    This is the part where some magic happens that I don't understand. I think they use Unity to do some dependency injection on each request? In my project I am using Ninject and I am wondering how I can get the same result. I guess InRequestScope in Ninject is the same as UnityPerWebRequestLifetimeManager? I am also wondering which class/method they are binding to which interface? Since the HttpContext.Items get destroyed each request, how do they prevent losing their user object?

    Anyway, it's kind of a long question, so I am grateful for any push in the right direction. Kind regards, Pickels
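
    The resolution to the puzzle is that nothing actually survives between requests: HttpContext.Items only memoizes the user for the duration of a single request, and AccountRepository.FindByClaim simply runs again on the next one. The Unity lifetime manager above gives injected objects that same one-request lifetime. A rough Ninject equivalent (a sketch; the binding shown is illustrative, not taken from Kigg):

        // Sketch: InRequestScope yields one instance per HTTP request,
        // mirroring the per-request dictionary kept in HttpContext.Items.
        kernel.Bind<IUser>()
              .ToMethod(ctx => accountRepository.FindByClaim(currentUserName))
              .InRequestScope();

    So yes, InRequestScope plays the same role as UnityPerWebRequestLifetimeManager: per-request caching, with the database hit repeated once per request.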

  • How to create a simple adf dashboard application with EJB 3.0

    - by Rodrigues, Raphael
    In this month's Oracle Magazine, Frank Nimphius wrote a very good article about an Oracle ADF Faces dashboard application that supports persistent user personalization. You can read the entire article by clicking here. The idea in that article is to extend the dashboard application. My idea here is to create a similar dashboard application, but instead of the ADF BC model layer, I intend to use EJB 3.0. There is just one small trick here, and I'll show you. I'm using the usual HR Oracle schema. The steps are:

    1. Create an ADF Fusion Application with EJB as the model layer.

    2. Generate the entities from tables (I'm using Departments and Employees only).

    3. Create a new Session Bean. I called it HRSessionEJB.

    4. Create a new method like this:

        public List getAllDepartmentsHavingEmployees(){
            JpaEntityManager jpaEntityManager = (JpaEntityManager)em.getDelegate();
            Query query = jpaEntityManager.createNamedQuery("Departments.allDepartmentsHavingEmployees");
            JavaBeanResult.setQueryResultClass(query, AggregatedDepartment.class);
            return query.getResultList();
        }

    5. In the Departments entity, create a new native query annotation:

        @Entity
        @NamedQueries({
            @NamedQuery(name = "Departments.findAll", query = "select o from Departments o")
        })
        @NamedNativeQueries({
            @NamedNativeQuery(name = "Departments.allDepartmentsHavingEmployees",
                query = "select e.department_id, d.department_name, sum(e.salary), avg(e.salary), max(e.salary), min(e.salary) from departments d, employees e where d.department_id = e.department_id group by e.department_id, d.department_name")})
        public class Departments implements Serializable {...}

    6. Create a new POJO called AggregatedDepartment:

        package oramag.sample.dashboard.model;

        import java.io.Serializable;
        import java.math.BigDecimal;

        public class AggregatedDepartment implements Serializable {

            @SuppressWarnings("compatibility:5167698678781240729")
            private static final long serialVersionUID = 1L;

            private BigDecimal departmentId;
            private String departmentName;
            private BigDecimal sum;
            private BigDecimal avg;
            private BigDecimal max;
            private BigDecimal min;

            public AggregatedDepartment() {
                super();
            }

            public AggregatedDepartment(BigDecimal departmentId, String departmentName,
                                        BigDecimal sum, BigDecimal avg,
                                        BigDecimal max, BigDecimal min) {
                super();
                this.departmentId = departmentId;
                this.departmentName = departmentName;
                this.sum = sum;
                this.avg = avg;
                this.max = max;
                this.min = min;
            }

            public void setDepartmentId(BigDecimal departmentId) { this.departmentId = departmentId; }
            public BigDecimal getDepartmentId() { return departmentId; }
            public void setDepartmentName(String departmentName) { this.departmentName = departmentName; }
            public String getDepartmentName() { return departmentName; }
            public void setSum(BigDecimal sum) { this.sum = sum; }
            public BigDecimal getSum() { return sum; }
            public void setAvg(BigDecimal avg) { this.avg = avg; }
            public BigDecimal getAvg() { return avg; }
            public void setMax(BigDecimal max) { this.max = max; }
            public BigDecimal getMax() { return max; }
            public void setMin(BigDecimal min) { this.min = min; }
            public BigDecimal getMin() { return min; }
        }

    7. Create the utility Java class called JavaBeanResult. The function of this class is to configure a native SQL query to return POJOs in a single line of code. Credits: http://onpersistence.blogspot.com.br/2010/07/eclipselink-jpa-native-constructor.html

        package oramag.sample.dashboard.model.util;

        /*******************************************************************************
         * Copyright (c) 2010 Oracle. All rights reserved.
         * This program and the accompanying materials are made available under the
         * terms of the Eclipse Public License v1.0 and Eclipse Distribution License v. 1.0
         * which accompanies this distribution.
         * The Eclipse Public License is available at http://www.eclipse.org/legal/epl-v10.html
         * and the Eclipse Distribution License is available at
         * http://www.eclipse.org/org/documents/edl-v10.php.
         *
         * @author shsmith
         ******************************************************************************/

        import java.lang.reflect.Constructor;
        import java.lang.reflect.InvocationTargetException;
        import java.util.ArrayList;
        import java.util.List;
        import javax.persistence.Query;
        import org.eclipse.persistence.exceptions.ConversionException;
        import org.eclipse.persistence.internal.helper.ConversionManager;
        import org.eclipse.persistence.internal.sessions.AbstractRecord;
        import org.eclipse.persistence.internal.sessions.AbstractSession;
        import org.eclipse.persistence.jpa.JpaHelper;
        import org.eclipse.persistence.queries.DatabaseQuery;
        import org.eclipse.persistence.queries.QueryRedirector;
        import org.eclipse.persistence.sessions.Record;
        import org.eclipse.persistence.sessions.Session;

        /***
         * This class is a simple query redirector that intercepts the result of a
         * native query and builds an instance of the specified JavaBean class from
         * each result row. The order of the selected columns must match the JavaBean
         * class constructor arguments order.
         *
         * To configure a JavaBeanResult on a native SQL query use:
         * JavaBeanResult.setQueryResultClass(query, SomeBeanClass.class);
         * where query is either a JPA SQL Query or native EclipseLink DatabaseQuery.
         *
         * @author shsmith
         */
        public final class JavaBeanResult implements QueryRedirector {

            private static final long serialVersionUID = 3025874987115503731L;

            protected Class resultClass;

            public static void setQueryResultClass(Query query, Class resultClass) {
                JavaBeanResult javaBeanResult = new JavaBeanResult(resultClass);
                DatabaseQuery databaseQuery = JpaHelper.getDatabaseQuery(query);
                databaseQuery.setRedirector(javaBeanResult);
            }

            public static void setQueryResultClass(DatabaseQuery query, Class resultClass) {
                JavaBeanResult javaBeanResult = new JavaBeanResult(resultClass);
                query.setRedirector(javaBeanResult);
            }

            protected JavaBeanResult(Class resultClass) {
                this.resultClass = resultClass;
            }

            @SuppressWarnings("unchecked")
            public Object invokeQuery(DatabaseQuery query, Record arguments, Session session) {
                List results = new ArrayList();
                try {
                    Constructor[] constructors = resultClass.getDeclaredConstructors();
                    Constructor javaBeanClassConstructor = null;
                    Class[] constructorParameterTypes = null;
                    List<Object[]> rows = (List<Object[]>) query.execute(
                            (AbstractSession) session, (AbstractRecord) arguments);
                    for (Object[] columns : rows) {
                        // find a constructor whose arity matches the select list
                        boolean found = false;
                        for (Constructor constructor : constructors) {
                            javaBeanClassConstructor = constructor;
                            constructorParameterTypes = javaBeanClassConstructor.getParameterTypes();
                            if (columns.length == constructorParameterTypes.length) {
                                found = true;
                                break;
                            }
                        }
                        if (!found)
                            throw new ColumnParameterNumberMismatchException(resultClass);
                        Object[] constructorArgs = new Object[constructorParameterTypes.length];
                        for (int j = 0; j < columns.length; j++) {
                            Object columnValue = columns[j];
                            Class parameterType = constructorParameterTypes[j];
                            // convert the column value to the correct type--if possible
                            constructorArgs[j] = ConversionManager.getDefaultManager()
                                    .convertObject(columnValue, parameterType);
                        }
                        results.add(javaBeanClassConstructor.newInstance(constructorArgs));
                    }
                } catch (ConversionException e) {
                    throw new ColumnParameterMismatchException(e);
                } catch (IllegalArgumentException e) {
                    throw new ColumnParameterMismatchException(e);
                } catch (InstantiationException e) {
                    throw new ColumnParameterMismatchException(e);
                } catch (IllegalAccessException e) {
                    throw new ColumnParameterMismatchException(e);
                } catch (InvocationTargetException e) {
                    throw new ColumnParameterMismatchException(e);
                }
                return results;
            }

            public final class ColumnParameterMismatchException extends RuntimeException {
                private static final long serialVersionUID = 4752000720859502868L;

                public ColumnParameterMismatchException(Throwable t) {
                    super("Exception while processing query results-ensure column order matches constructor parameter order", t);
                }
            }

            public final class ColumnParameterNumberMismatchException extends RuntimeException {
                private static final long serialVersionUID = 1776794744797667755L;

                public ColumnParameterNumberMismatchException(Class clazz) {
                    super("Number of selected columns does not match number of constructor arguments for: "
                            + clazz.getName());
                }
            }
        }

    8. Create the Data Control and a .jsf or .jspx page.

    9. Drag allDepartmentsHavingEmployees from the Data Control and drop it on your page.

    10. Choose Graph > Type: Bar (Normal) > any layout.

    11. In the wizard screen, in the Bars field, add: sum, avg, max, min. In the X Axis field, add departmentName, and click the OK button.

    12. Run the page; the result is shown below.

    You can download the workspace here. It was built using the latest JDeveloper version, 11.1.2.2.
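
    For quick reference, the only wiring JavaBeanResult needs is the single line in the session-bean method above, and the native query's select list must line up positionally with the AggregatedDepartment constructor (department_id, department_name, sum, avg, max, min):

        // From HRSessionEJB: route the native query's rows into POJOs
        Query query = jpaEntityManager.createNamedQuery("Departments.allDepartmentsHavingEmployees");
        JavaBeanResult.setQueryResultClass(query, AggregatedDepartment.class);
        List<AggregatedDepartment> stats = query.getResultList();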

  • Nginx no longer serves uwsgi application behind HAProxy - Looks for static file instead

    - by Ralph
    We implemented our web application using web2py. It consists of several modules offering a REST API at various resources (e.g. /dids, /replicas, ...). The API is used by clients implementing requests.py. My problem is that our web app works fine if it's behind HAProxy and hosted by Apache using mod_wsgi. It also works fine if the clients interact with nginx directly. It doesn't work, though, when using HAProxy in front of nginx. My guess is that HAProxy somehow modifies the request and thus nginx behaves differently, i.e. looks for a static file instead of calling the WSGI container. Unfortunately I can't figure out what exactly is going (wr)on(g). Here are the relevant config sections of these three components' config files. If you miss anything, please let me know.

    1) haproxy.conf:

        frontend app-lb
            bind loadbalancer:443 ssl crt /etc/grid-security/hostcertkey.pem
            default_backend nginx-servers
            mode http

        backend nginx-servers
            balance leastconn
            option forwardfor
            server nginx-01 nginx-server-int-01.domain.com:80 check

    2) nginx.conf:

        sendfile off;
        #tcp_nopush on;
        keepalive_timeout 65;
        include /etc/nginx/conf.d/*.conf;

        server {
            server_name nginx-server-int-01.domain.com;
            root /path/to/app/;
            location / {
                uwsgi_pass unix:///tmp/app.sock;
                include uwsgi_params;
                uwsgi_read_timeout 600; # Requests can run for a serious long time
            }

    3) uwsgi.ini:

        [uwsgi]
        chdir = /path/to/app/
        chmod-socket = 777
        no-default-app = True
        socket = /tmp/app.sock
        manage-script-name = True
        mount = /dids=did.py
        mount = /replicas=replica.py
        callable = application

    Now when I let my clients go against nginx-server-int-01.domain.com, everything is fine. In the access.log of nginx, lines like these appear:

        128.142.XXX.XX0 - - [23/Aug/2014:01:29:20 +0200] "POST /dids/attachments HTTP/1.1" 201 17 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-"
        128.142.XXX.XX0 - - [23/Aug/2014:01:29:20 +0200] "POST /dids/attachments HTTP/1.1" 201 17 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-"
        128.142.XXX.XX0 - - [23/Aug/2014:01:29:20 +0200] "POST /dids/user.ogueta/cnt_mc12_8TeV.16304.stream_name_too_long.other.notype.004202218365415e990b9997ea859f20.user/dids HTTP/1.1" 201 17 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-"
        128.142.XXX.XX0 - - [23/Aug/2014:01:29:20 +0200] "POST /replicas/list HTTP/1.1" 200 5282 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-"
        128.142.XXX.XX0 - - [23/Aug/2014:01:29:20 +0200] "POST /replicas/list HTTP/1.1" 200 5094 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-"
        128.142.XXX.XX0 - - [23/Aug/2014:01:29:20 +0200] "POST /replicas/list HTTP/1.1" 200 528 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-"
        128.142.XXX.XX0 - - [23/Aug/2014:01:29:21 +0200] "GET /dids/mc13_14TeV/dids/search?project=mc13_14TeV&stream_name=%2Adummy&type=dataset&datatype=NTUP_SMDYMUMU HTTP/1.1" 401 73 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-"
        128.142.XXX.XX0 - - [23/Aug/2014:01:29:21 +0200] "POST /replicas/list HTTP/1.1" 200 713 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-"
        128.142.XXX.XX0 - - [23/Aug/2014:01:29:21 +0200] "POST /dids/attachments HTTP/1.1" 201 17 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-"

    But when I switch the clients to go against HAProxy (loadbalancer.domain.com:443), the error.log of nginx shows lines like these:

        2014/08/23 01:26:01 [error] 1705#0: *21231 open() "/usr/share/nginx/html/dids/attachments" failed (2: No such file or directory), client: 128.142.XXX.XX1, server: localhost, request: "POST /dids/attachments HTTP/1.1", host: "loadbalancer.domain.com"
        2014/08/23 01:26:02 [error] 1705#0: *21232 open() "/usr/share/nginx/html/replicas/list" failed (2: No such file or directory), client: 128.142.XXX.XX1, server: localhost, request: "POST /replicas/list HTTP/1.1", host: "loadbalancer.domain.com"
        2014/08/23 01:26:02 [error] 1705#0: *21233 open() "/usr/share/nginx/html/dids/attachments" failed (2: No such file or directory), client: 128.142.XXX.XX1, server: localhost, request: "POST /dids/attachments HTTP/1.1", host: "loadbalancer.domain.com"
        2014/08/23 01:26:02 [error] 1705#0: *21234 open() "/usr/share/nginx/html/replicas/list" failed (2: No such file or directory), client: 128.142.XXX.XX1, server: localhost, request: "POST /replicas/list HTTP/1.1", host: "loadbalancer.domain.com"
        2014/08/23 01:26:02 [error] 1705#0: *21235 open() "/usr/share/nginx/html/dids/attachments" failed (2: No such file or directory), client: 128.142.XXX.XXX, server: localhost, request: "POST /dids/attachments HTTP/1.1", host: "loadbalancer"
        2014/08/23 01:26:02 [error] 1705#0: *21238 open() "/usr/share/nginx/html/replicas/list" failed (2: No such file or directory), client: 128.142.XXX.XXX, server: localhost, request: "POST /replicas/list HTTP/1.1", host: "loadbalancer.domain.com"
        2014/08/23 01:26:02 [error] 1705#0: *21239 open() "/usr/share/nginx/html/dids/attachments" failed (2: No such file or directory), client: 128.142.XXX.XXX, server: localhost, request: "POST /dids/attachments HTTP/1.1", host: "loadbalancer.domain.com"
        2014/08/23 01:26:02 [error] 1705#0: *21242 open() "/usr/share/nginx/html/replicas/list" failed (2: No such file or directory), client: 128.142.XXX.XXX, server: localhost, request: "POST /replicas/list HTTP/1.1", host: "loadbalancer.domain.com"
        2014/08/23 01:26:02 [error] 1705#0: *21244 open() "/usr/share/nginx/html/dids/attachments" failed (2: No such file or directory), client: 128.142.XXX.XXX, server: localhost, request: "POST /dids/attachments HTTP/1.1", host: "loadbalancer.domain.com"

    As you can see, the request looks the same; only the client IP changed, from the client's host to the one from loadbalancer.domain.com. But for whatever reason, nginx seems to assume that it is a static file to be served, which eventually results in the file-not-found message. I searched the web for multiple hours already, but without much luck so far. Any help is very much appreciated. Cheers, Ralph
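
    One hypothesis that fits the logs exactly: the failing requests are served by "server: localhost" with root /usr/share/nginx/html, i.e. nginx's default server block. Traffic relayed by HAProxy carries Host: loadbalancer.domain.com, which does not match server_name nginx-server-int-01.domain.com, so those requests never reach the uwsgi location. A sketch of the fix:

        # Make the uwsgi server block match HAProxy traffic as well,
        # or make it the default for the listen socket.
        server {
            listen 80 default_server;
            server_name nginx-server-int-01.domain.com loadbalancer.domain.com;
            ...
        }

    Direct client traffic was matched by name, which would be why bypassing HAProxy worked.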

  • Converting Encrypted Values

    - by Johnm
    Your database has been protecting sensitive data at rest using the cell-level encryption features of SQL Server for quite some time. The employees in the auditing department have been inviting you to their after-work gatherings and buying you drinks. Thousands of customers implicitly include you in their prayers of thanksgiving as their identities remain safe in your company's database. The cipher text resting snugly in a column of the varbinary data type is great for security, but it can create some interesting challenges when interacting with other data types, such as the XML data type.

    The XML data type is one that is often used as a message type for the Service Broker feature of SQL Server. It also can be an interesting data type to capture for auditing or integrating with external systems. The challenge that cipher text presents is that the need for decryption remains even after it has experienced its XML metamorphosis. Quite an interesting challenge nonetheless; but fear not. There is a solution.

    To simulate this scenario, we first will want to create a plain text value for us to encrypt. We will do this by creating a variable to store our plain text value:

        -- set plain text value
        DECLARE @PlainText NVARCHAR(255);
        SET @PlainText = 'This is plain text to encrypt';

    The next step will be to create a variable that will store the cipher text that is generated from the encryption process. We will populate this variable by using a pre-defined symmetric key and certificate combination:

        -- encrypt plain text value
        DECLARE @CipherText VARBINARY(MAX);
        OPEN SYMMETRIC KEY SymKey
            DECRYPTION BY CERTIFICATE SymCert
            WITH PASSWORD = 'mypassword2010';
        SET @CipherText = EncryptByKey(Key_GUID('SymKey'), @PlainText);
        CLOSE ALL SYMMETRIC KEYS;

    The value of our newly generated cipher text is 0x006E12933CBFB0469F79ABCC79A583--. This will be important as we reference our cipher text later in this post.

    Our final step in preparing our scenario is to create a table variable to simulate the existence of a table that contains a column used to hold encrypted values. Once this table variable has been created, populate the table variable with the newly generated cipher text:

        -- capture value in table variable
        DECLARE @tbl TABLE (EncVal varbinary(MAX));
        INSERT INTO @tbl (EncVal) VALUES (@CipherText);

    We are now ready to experience the challenge of capturing our encrypted column in an XML data type using the FOR XML clause:

        -- capture set in xml
        DECLARE @xml XML;
        SET @xml = (SELECT EncVal
                    FROM @tbl AS MYTABLE
                    FOR XML AUTO, BINARY BASE64, ROOT('root'));

    If you add the SELECT @xml statement at the end of this portion of the code, you will see the contents of the XML data in its raw format:

        <root>
          <MYTABLE EncVal="AG4Skzy/sEafeavMeaWDBwEAAACE--" />
        </root>

    Strangely, the value that is captured appears nothing like the value that was created through the encryption process. The result is that when this XML is converted into a readable data set, the encrypted value will not be able to be decrypted, even with access to the symmetric key and certificate used to perform the decryption.

    An immediate thought might be to convert the varbinary data type to either a varchar or nvarchar before creating the XML data. This approach makes good sense. The code for this might look something like the following:

        -- capture set in xml
        DECLARE @xml XML;
        SET @xml = (SELECT CONVERT(NVARCHAR(MAX), EncVal) AS EncVal
                    FROM @tbl AS MYTABLE
                    FOR XML AUTO, BINARY BASE64, ROOT('root'));

    However, this results in the following error:

        Msg 9420, Level 16, State 1, Line 26
        XML parsing: line 1, character 37, illegal xml character

    A quick query that returns CONVERT(NVARCHAR(MAX), EncVal) reveals that the value that is causing the error looks like something off of a genuine Chinese menu. While this situation does present us with one of those spine-tingling, expletive-generating challenges, rest assured that this approach is on the right track.

    With the addition of the "style" argument to the CONVERT method, our solution is at hand. When converting varbinary data types we have three styles available to us:

    - The first is to not include the style parameter, or use the value of "0". As we see, this style will not work for us.
    - The second option is to use the value of "1", which will keep our varbinary value including the "0x" prefix. In our case, the value will be 0x006E12933CBFB0469F79ABCC79A583--
    - The third option is to use the value of "2", which will chop the "0x" prefix off of our varbinary value. In our case, the value will be 006E12933CBFB0469F79ABCC79A583--

    Since we will want to convert this back to varbinary when reading this value from the XML data, we will want the "0x" prefix, so we will change our code as follows:

        -- capture set in xml
        DECLARE @xml XML;
        SET @xml = (SELECT CONVERT(NVARCHAR(MAX), EncVal, 1) AS EncVal
                    FROM @tbl AS MYTABLE
                    FOR XML AUTO, BINARY BASE64, ROOT('root'));

    Once again, with the inclusion of the SELECT @xml statement at the end of this portion of the code, you will see the contents of the XML data in its raw format:

        <root>
          <MYTABLE EncVal="0x006E12933CBFB0469F79ABCC79A583--" />
        </root>

    Nice! We are now cooking with gas. To continue our scenario, we will want to parse the XML data into a data set so that we can glean our freshly captured cipher text. Once we have our cipher text snagged, we will capture it into a variable so that it can be used during decryption:

        -- read back xml
        DECLARE @hdoc INT;
        DECLARE @EncVal NVARCHAR(MAX);
        EXEC sp_xml_preparedocument @hdoc OUTPUT, @xml;
        SELECT @EncVal = EncVal
        FROM OPENXML (@hdoc, '/root/MYTABLE')
        WITH ([EncVal] VARBINARY(MAX) '@EncVal');
        EXEC sp_xml_removedocument @hdoc;

    Finally, the decryption of our cipher text using the DECRYPTBYKEYAUTOCERT method and the certificate utilized to perform the encryption earlier in our exercise:

        SELECT CONVERT(NVARCHAR(MAX),
                       DecryptByKeyAutoCert(
                           CERT_ID('AuditLogCert'),
                           N'mypassword2010',
                           @EncVal
                       )) AS EncVal;

    Ah yes, another hurdle presents itself! The decryption produced the value of NULL, which in cryptography means that either you don't have permissions to decrypt the cipher text or something went wrong during the decryption process (ok, sometimes the value is actually NULL; but not in this case). As we see, the @EncVal variable is an nvarchar data type. The third parameter of the DECRYPTBYKEYAUTOCERT method requires a varbinary value. Therefore we will need to utilize our handy-dandy CONVERT method:

        SELECT CONVERT(NVARCHAR(MAX),
                       DecryptByKeyAutoCert(
                           CERT_ID('AuditLogCert'),
                           N'mypassword2010',
                           CONVERT(VARBINARY(MAX), @EncVal)
                       )) AS EncVal;

    Oh, almost. The result remains NULL despite our conversion to the varbinary data type. This is due to the creation of a varbinary value that does not reflect the actual value of our @EncVal variable, but rather a varbinary conversion of the variable itself. In this case, something like 0x3000780030003000360045003--.

    Considering the "style" parameter got us past the XML challenge, we will want to consider its power for this challenge as well. Knowing that the value of "1" will provide us with the actual value including the "0x", we will opt to utilize that value in this case:

        SELECT CONVERT(NVARCHAR(MAX),
                       DecryptByKeyAutoCert(
                           CERT_ID('SymCert'),
                           N'mypassword2010',
                           CONVERT(VARBINARY(MAX), @EncVal, 1)
                       )) AS EncVal;

    Bingo, we have success! We have discovered what happens with varbinary data when captured as XML data. We have figured out how to make this data useful post-XML-ification. Best of all, we now have a choice in after-work parties now that our very happy client who depends on our XML-based interface invites us for dinner in celebration. All thanks to the effective use of the style parameter.
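
    A tiny self-contained demo of the three styles (illustrative values; the binary styles require a SQL Server version where CONVERT supports them, as used throughout this post):

        -- Sketch: varbinary-to-string conversion styles side by side
        DECLARE @v VARBINARY(4);
        SET @v = 0x1234ABCD;
        SELECT CONVERT(NVARCHAR(MAX), @v, 0) AS Style0, -- bytes reinterpreted as characters
               CONVERT(NVARCHAR(MAX), @v, 1) AS Style1, -- N'0x1234ABCD'
               CONVERT(NVARCHAR(MAX), @v, 2) AS Style2; -- N'1234ABCD'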

  • LSI 9285-8e and Supermicro SC837E26-RJBOD1 duplicate enclosure ID and slot numbers

    - by Andy Shinn
    I am working with 2 x Supermicro SC837E26-RJBOD1 chassis connected to a single LSI 9285-8e card in a Supermicro 1U host. There are 28 drives in each chassis for a total of 56 drives in 28 RAID1 mirrors. The problem I am running in to is that there are duplicate slots for the 2 chassis (the slots list twice and only go from 0 to 27). All the drives also show the same enclosure ID (ID 36). However, MegaCLI -encinfo lists the 2 enclosures correctly (ID 36 and ID 65). My question is, why would this happen? Is there an option I am missing to use 2 enclosures effectively? This is blocking me rebuilding a drive that failed in slot 11 since I can only specify enclosure and slot as parameters to replace a drive. When I do this, it picks the wrong slot 11 (device ID 46 instead of device ID 19). Adapter #1 is the LSI 9285-8e, adapter #0 (which I removed due to space limitations) is the onboard LSI. Adapter information: Adapter #1 ============================================================================== Versions ================ Product Name : LSI MegaRAID SAS 9285-8e Serial No : SV12704804 FW Package Build: 23.1.1-0004 Mfg. Data ================ Mfg. Date : 06/30/11 Rework Date : 00/00/00 Revision No : 00A Battery FRU : N/A Image Versions in Flash: ================ BIOS Version : 5.25.00_4.11.05.00_0x05040000 WebBIOS Version : 6.1-20-e_20-Rel Preboot CLI Version: 05.01-04:#%00001 FW Version : 3.140.15-1320 NVDATA Version : 2.1106.03-0051 Boot Block Version : 2.04.00.00-0001 BOOT Version : 06.253.57.219 Pending Images in Flash ================ None PCI Info ================ Vendor Id : 1000 Device Id : 005b SubVendorId : 1000 SubDeviceId : 9285 Host Interface : PCIE ChipRevision : B0 Number of Frontend Port: 0 Device Interface : PCIE Number of Backend Port: 8 Port : Address 0 5003048000ee8e7f 1 5003048000ee8a7f 2 0000000000000000 3 0000000000000000 4 0000000000000000 5 0000000000000000 6 0000000000000000 7 0000000000000000 HW Configuration ================ SAS Address : 500605b0038f9210 BBU : Present Alarm : Present NVRAM : Present Serial Debugger : Present Memory : Present Flash : Present Memory Size : 1024MB TPM : Absent On board Expander: Absent Upgrade Key : Absent Temperature sensor for ROC : Present Temperature sensor for controller : Absent ROC temperature : 70 degree Celcius Settings ================ Current Time : 18:24:36 3/13, 2012 Predictive Fail Poll Interval : 300sec Interrupt Throttle Active Count : 16 Interrupt Throttle Completion : 50us Rebuild Rate : 30% PR Rate : 30% BGI Rate : 30% Check Consistency Rate : 30% Reconstruction Rate : 30% Cache Flush Interval : 4s Max Drives to Spinup at One Time : 2 Delay Among Spinup Groups : 12s Physical Drive Coercion Mode : Disabled Cluster Mode : Disabled Alarm : Enabled Auto Rebuild : Enabled Battery Warning : Enabled Ecc Bucket Size : 15 Ecc Bucket Leak Rate : 1440 Minutes Restore HotSpare on Insertion : Disabled Expose Enclosure Devices : Enabled Maintain PD Fail History : Enabled Host Request Reordering : Enabled Auto Detect BackPlane Enabled : SGPIO/i2c SEP Load Balance Mode : Auto Use FDE Only : No Security Key Assigned : No Security Key Failed : No Security Key Not Backedup : No Default LD PowerSave Policy : Controller Defined Maximum number of direct attached drives to spin up in 1 min : 10 Any Offline VD Cache Preserved : No Allow Boot with Preserved Cache : No Disable Online Controller Reset : No PFK in NVRAM : No Use disk activity for locate : No Capabilities ================ RAID Level Supported : RAID0, RAID1, RAID5, RAID6, 
RAID00, RAID10, RAID50, RAID60, PRL 11, PRL 11 with spanning, SRL 3 supported, PRL11-RLQ0 DDF layout with no span, PRL11-RLQ0 DDF layout with span Supported Drives : SAS, SATA Allowed Mixing: Mix in Enclosure Allowed Mix of SAS/SATA of HDD type in VD Allowed Status ================ ECC Bucket Count : 0 Limitations ================ Max Arms Per VD : 32 Max Spans Per VD : 8 Max Arrays : 128 Max Number of VDs : 64 Max Parallel Commands : 1008 Max SGE Count : 60 Max Data Transfer Size : 8192 sectors Max Strips PerIO : 42 Max LD per array : 16 Min Strip Size : 8 KB Max Strip Size : 1.0 MB Max Configurable CacheCade Size: 0 GB Current Size of CacheCade : 0 GB Current Size of FW Cache : 887 MB Device Present ================ Virtual Drives : 28 Degraded : 0 Offline : 0 Physical Devices : 59 Disks : 56 Critical Disks : 0 Failed Disks : 0 Supported Adapter Operations ================ Rebuild Rate : Yes CC Rate : Yes BGI Rate : Yes Reconstruct Rate : Yes Patrol Read Rate : Yes Alarm Control : Yes Cluster Support : No BBU : No Spanning : Yes Dedicated Hot Spare : Yes Revertible Hot Spares : Yes Foreign Config Import : Yes Self Diagnostic : Yes Allow Mixed Redundancy on Array : No Global Hot Spares : Yes Deny SCSI Passthrough : No Deny SMP Passthrough : No Deny STP Passthrough : No Support Security : No Snapshot Enabled : No Support the OCE without adding drives : Yes Support PFK : Yes Support PI : No Support Boot Time PFK Change : Yes Disable Online PFK Change : No PFK TrailTime Remaining : 0 days 0 hours Support Shield State : Yes Block SSD Write Disk Cache Change: Yes Supported VD Operations ================ Read Policy : Yes Write Policy : Yes IO Policy : Yes Access Policy : Yes Disk Cache Policy : Yes Reconstruction : Yes Deny Locate : No Deny CC : No Allow Ctrl Encryption: No Enable LDBBM : No Support Breakmirror : No Power Savings : Yes Supported PD Operations ================ Force Online : Yes Force Offline : Yes Force Rebuild : Yes Deny Force Failed : No Deny Force Good/Bad : No Deny Missing Replace : No Deny Clear : No Deny Locate : No Support Temperature : Yes Disable Copyback : No Enable JBOD : No Enable Copyback on SMART : No Enable Copyback to SSD on SMART Error : Yes Enable SSD Patrol Read : No PR Correct Unconfigured Areas : Yes Enable Spin Down of UnConfigured Drives : Yes Disable Spin Down of hot spares : No Spin Down time : 30 T10 Power State : Yes Error Counters ================ Memory Correctable Errors : 0 Memory Uncorrectable Errors : 0 Cluster Information ================ Cluster Permitted : No Cluster Active : No Default Settings ================ Phy Polarity : 0 Phy PolaritySplit : 0 Background Rate : 30 Strip Size : 64kB Flush Time : 4 seconds Write Policy : WB Read Policy : Adaptive Cache When BBU Bad : Disabled Cached IO : No SMART Mode : Mode 6 Alarm Disable : Yes Coercion Mode : None ZCR Config : Unknown Dirty LED Shows Drive Activity : No BIOS Continue on Error : No Spin Down Mode : None Allowed Device Type : SAS/SATA Mix Allow Mix in Enclosure : Yes Allow HDD SAS/SATA Mix in VD : Yes Allow SSD SAS/SATA Mix in VD : No Allow HDD/SSD Mix in VD : No Allow SATA in Cluster : No Max Chained Enclosures : 16 Disable Ctrl-R : Yes Enable Web BIOS : Yes Direct PD Mapping : No BIOS Enumerate VDs : Yes Restore Hot Spare on Insertion : No Expose Enclosure Devices : Yes Maintain PD Fail History : Yes Disable Puncturing : No Zero Based Enclosure Enumeration : No PreBoot CLI Enabled : Yes LED Show Drive Activity : Yes Cluster Disable : Yes SAS Disable : No Auto Detect BackPlane Enable 
: SGPIO/i2c SEP Use FDE Only : No Enable Led Header : No Delay during POST : 0 EnableCrashDump : No Disable Online Controller Reset : No EnableLDBBM : No Un-Certified Hard Disk Drives : Allow Treat Single span R1E as R10 : No Max LD per array : 16 Power Saving option : Don't Auto spin down Configured Drives Max power savings option is not allowed for LDs. Only T10 power conditions are to be used. Default spin down time in minutes: 30 Enable JBOD : No TTY Log In Flash : No Auto Enhanced Import : No BreakMirror RAID Support : No Disable Join Mirror : No Enable Shield State : Yes Time taken to detect CME : 60s Exit Code: 0x00 Enclosure information: # /opt/MegaRAID/MegaCli/MegaCli64 -encinfo -a1 Number of enclosures on adapter 1 -- 3 Enclosure 0: Device ID : 36 Number of Slots : 28 Number of Power Supplies : 2 Number of Fans : 3 Number of Temperature Sensors : 1 Number of Alarms : 1 Number of SIM Modules : 0 Number of Physical Drives : 28 Status : Normal Position : 1 Connector Name : Port B Enclosure type : SES VendorId is LSI CORP and Product Id is SAS2X36 VendorID and Product ID didnt match FRU Part Number : N/A Enclosure Serial Number : N/A ESM Serial Number : N/A Enclosure Zoning Mode : N/A Partner Device Id : 65 Inquiry data : Vendor Identification : LSI CORP Product Identification : SAS2X36 Product Revision Level : 0718 Vendor Specific : x36-55.7.24.1 Number of Voltage Sensors :2 Voltage Sensor :0 Voltage Sensor Status :OK Voltage Value :5020 milli volts Voltage Sensor :1 Voltage Sensor Status :OK Voltage Value :11820 milli volts Number of Power Supplies : 2 Power Supply : 0 Power Supply Status : OK Power Supply : 1 Power Supply Status : OK Number of Fans : 3 Fan : 0 Fan Speed :Low Speed Fan Status : OK Fan : 1 Fan Speed :Low Speed Fan Status : OK Fan : 2 Fan Speed :Low Speed Fan Status : OK Number of Temperature Sensors : 1 Temp Sensor : 0 Temperature : 48 Temperature Sensor Status : OK Number of Chassis : 1 Chassis : 0 Chassis Status : OK Enclosure 1: Device ID : 65 Number of Slots : 28 Number of Power Supplies : 2 Number of Fans : 3 Number of Temperature Sensors : 1 Number of Alarms : 1 Number of SIM Modules : 0 Number of Physical Drives : 28 Status : Normal Position : 1 Connector Name : Port A Enclosure type : SES VendorId is LSI CORP and Product Id is SAS2X36 VendorID and Product ID didnt match FRU Part Number : N/A Enclosure Serial Number : N/A ESM Serial Number : N/A Enclosure Zoning Mode : N/A Partner Device Id : 36 Inquiry data : Vendor Identification : LSI CORP Product Identification : SAS2X36 Product Revision Level : 0718 Vendor Specific : x36-55.7.24.1 Number of Voltage Sensors :2 Voltage Sensor :0 Voltage Sensor Status :OK Voltage Value :5020 milli volts Voltage Sensor :1 Voltage Sensor Status :OK Voltage Value :11760 milli volts Number of Power Supplies : 2 Power Supply : 0 Power Supply Status : OK Power Supply : 1 Power Supply Status : OK Number of Fans : 3 Fan : 0 Fan Speed :Low Speed Fan Status : OK Fan : 1 Fan Speed :Low Speed Fan Status : OK Fan : 2 Fan Speed :Low Speed Fan Status : OK Number of Temperature Sensors : 1 Temp Sensor : 0 Temperature : 47 Temperature Sensor Status : OK Number of Chassis : 1 Chassis : 0 Chassis Status : OK Enclosure 2: Device ID : 252 Number of Slots : 8 Number of Power Supplies : 0 Number of Fans : 0 Number of Temperature Sensors : 0 Number of Alarms : 0 Number of SIM Modules : 1 Number of Physical Drives : 0 Status : Normal Position : 1 Connector Name : Unavailable Enclosure type : SGPIO Failed in first Inquiry commnad FRU Part Number : 
N/A Enclosure Serial Number : N/A ESM Serial Number : N/A Enclosure Zoning Mode : N/A Partner Device Id : Unavailable Inquiry data : Vendor Identification : LSI Product Identification : SGPIO Product Revision Level : N/A Vendor Specific : Exit Code: 0x00 Now, notice that each slot 11 device shows an enclosure ID of 36, I think this is where the discrepancy happens. One should be 36. But the other should be on enclosure 65. Drives in slot 11: Enclosure Device ID: 36 Slot Number: 11 Drive's postion: DiskGroup: 5, Span: 0, Arm: 1 Enclosure position: 0 Device Id: 48 WWN: Sequence Number: 11 Media Error Count: 0 Other Error Count: 0 Predictive Failure Count: 0 Last Predictive Failure Event Seq Number: 0 PD Type: SATA Raw Size: 2.728 TB [0x15d50a3b0 Sectors] Non Coerced Size: 2.728 TB [0x15d40a3b0 Sectors] Coerced Size: 2.728 TB [0x15d400000 Sectors] Firmware state: Online, Spun Up Is Commissioned Spare : YES Device Firmware Level: A5C0 Shield Counter: 0 Successful diagnostics completion on : N/A SAS Address(0): 0x5003048000ee8a53 Connected Port Number: 1(path0) Inquiry Data: MJ1311YNG6YYXAHitachi HDS5C3030ALA630 MEAOA5C0 FDE Enable: Disable Secured: Unsecured Locked: Unlocked Needs EKM Attention: No Foreign State: None Device Speed: 6.0Gb/s Link Speed: 6.0Gb/s Media Type: Hard Disk Device Drive Temperature :30C (86.00 F) PI Eligibility: No Drive is formatted for PI information: No PI: No PI Drive's write cache : Disabled Drive's NCQ setting : Enabled Port-0 : Port status: Active Port's Linkspeed: 6.0Gb/s Drive has flagged a S.M.A.R.T alert : No Enclosure Device ID: 36 Slot Number: 11 Drive's postion: DiskGroup: 19, Span: 0, Arm: 1 Enclosure position: 0 Device Id: 19 WWN: Sequence Number: 4 Media Error Count: 0 Other Error Count: 0 Predictive Failure Count: 0 Last Predictive Failure Event Seq Number: 0 PD Type: SATA Raw Size: 2.728 TB [0x15d50a3b0 Sectors] Non Coerced Size: 2.728 TB [0x15d40a3b0 Sectors] Coerced Size: 2.728 TB [0x15d400000 Sectors] Firmware state: Online, Spun Up Is Commissioned Spare : NO Device Firmware Level: A580 Shield Counter: 0 Successful diagnostics completion on : N/A SAS Address(0): 0x5003048000ee8e53 Connected Port Number: 0(path0) Inquiry Data: MJ1313YNG1VA5CHitachi HDS5C3030ALA630 MEAOA580 FDE Enable: Disable Secured: Unsecured Locked: Unlocked Needs EKM Attention: No Foreign State: None Device Speed: 6.0Gb/s Link Speed: 6.0Gb/s Media Type: Hard Disk Device Drive Temperature :30C (86.00 F) PI Eligibility: No Drive is formatted for PI information: No PI: No PI Drive's write cache : Disabled Drive's NCQ setting : Enabled Port-0 : Port status: Active Port's Linkspeed: 6.0Gb/s Drive has flagged a S.M.A.R.T alert : No Update 06/28/12: I finally have some new information about (what we think) the root cause of this problem so I thought I would share. After getting in contact with a very knowledgeable Supermicro tech, they provided us with a tool called Xflash (doesn't appear to be readily available on their FTP). When we gathered some information using this utility, my colleague found something very strange: root@mogile2 test]# ./xflash.dat -i get avail Initializing Interface. Expander: SAS2X36 (SAS2x36) 1) SAS2X36 (SAS2x36) (50030480:00EE917F) (0.0.0.0) 2) SAS2X36 (SAS2x36) (50030480:00E9D67F) (0.0.0.0) 3) SAS2X36 (SAS2x36) (50030480:0112D97F) (0.0.0.0) This lists the connected enclosures. You see the 3 connected (we have since added a 3rd and a 4th which is not yet showing up) with their respective SAS address / WWN (50030480:00EE917F). 
Now we can use this address to get information on the individual enclosures:

[root@mogile2 test]# ./xflash.dat -i 5003048000EE917F get exp
Initializing Interface.
Expander: SAS2X36 (SAS2x36)
Reading the expander information..........
Expander: SAS2X36 (SAS2x36) B3
SAS Address: 50030480:00EE917F
Enclosure Logical Id: 50030480:0000007F
IP Address: 0.0.0.0
Component Identifier: 0x0223
Component Revision: 0x05

[root@mogile2 test]# ./xflash.dat -i 5003048000E9D67F get exp
Initializing Interface.
Expander: SAS2X36 (SAS2x36)
Reading the expander information..........
Expander: SAS2X36 (SAS2x36) B3
SAS Address: 50030480:00E9D67F
Enclosure Logical Id: 50030480:0000007F
IP Address: 0.0.0.0
Component Identifier: 0x0223
Component Revision: 0x05

[root@mogile2 test]# ./xflash.dat -i 500304800112D97F get exp
Initializing Interface.
Expander: SAS2X36 (SAS2x36)
Reading the expander information..........
Expander: SAS2X36 (SAS2x36) B3
SAS Address: 50030480:0112D97F
Enclosure Logical Id: 50030480:0112D97F
IP Address: 0.0.0.0
Component Identifier: 0x0223
Component Revision: 0x05

Did you catch it? The first two enclosures' logical IDs are partially masked out, whereas the third one (which has a correct, unique enclosure ID) is not. We pointed this out to Supermicro and were able to confirm that this address is supposed to be set during manufacturing, and that there was a problem with a certain batch of these enclosures where the logical ID was not set. We believe that the RAID controller determines the enclosure ID from the logical ID, and since our first two enclosures share the same logical ID, they get the same enclosure ID. We also confirmed that 0000007F is the default ID that comes from LSI. The next pointer suggesting this could be a manufacturing problem with a run of JBODs is the fact that all 6 of the enclosures with this problem begin with 00E. I believe that somewhere between 00E8 and 00EE Supermicro forgot to program the logical IDs correctly and neglected to recall or fix the problem post-production. Fortunately for us, Supermicro provides a tool to manage the WWN and logical ID of these devices: ftp://ftp.supermicro.com/utility/ExpanderXtools_Lite/. Our next step is to schedule a shutdown of these JBODs (after data migration), reprogram the logical IDs, and see if that solves the problem.

Update 06/28/12 #2: I just discovered this FAQ at Supermicro while Google searching for "lsi 0000007f": http://www.supermicro.com/support/faqs/faq.cfm?faq=11805. I still don't understand why, in the several times we contacted Supermicro, they never directed us to this article :\
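For anyone chasing the same symptom, a quick way to spot the clash is to diff the SAS address against the enclosure logical ID for every expander. A minimal sketch in Python, assuming only the xflash "get exp" output format shown above (the parsing is mine, not part of any Supermicro or LSI tooling):

import re
from collections import defaultdict

def parse_expander_dumps(text):
    """Pair each 'SAS Address' with the 'Enclosure Logical Id' that follows it."""
    sas = re.findall(r"SAS Address:\s*([0-9A-Fa-f]{8}:[0-9A-Fa-f]{8})", text)
    logical = re.findall(r"Enclosure Logical Id:\s*([0-9A-Fa-f]{8}:[0-9A-Fa-f]{8})", text)
    return list(zip(sas, logical))

def duplicate_logical_ids(pairs):
    """Any logical ID shared by more than one expander is suspect."""
    groups = defaultdict(list)
    for sas, logical in pairs:
        groups[logical].append(sas)
    return {lid: addrs for lid, addrs in groups.items() if len(addrs) > 1}

dump = """
SAS Address: 50030480:00EE917F
Enclosure Logical Id: 50030480:0000007F
SAS Address: 50030480:00E9D67F
Enclosure Logical Id: 50030480:0000007F
SAS Address: 50030480:0112D97F
Enclosure Logical Id: 50030480:0112D97F
"""

for lid, addrs in duplicate_logical_ids(parse_expander_dumps(dump)).items():
    print("logical ID %s is shared by: %s" % (lid, ", ".join(addrs)))
# -> logical ID 50030480:0000007F is shared by: 50030480:00EE917F, 50030480:00E9D67F

Any logical ID that comes back shared (especially the LSI default, 50030480:0000007F) is a candidate for reprogramming with the ExpanderXtools utility linked above.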

    Read the article

  • Ball bouncing at a certain angle and efficiency computations

    - by X Y
    I would like to make a pong game with a small twist (for now). Every time the ball bounces off one of the paddles i want it to be under a certain angle (between a min and a max). I simply can't wrap my head around how to actually do it (i have some thoughts and such but i simply cannot implement them properly - i feel i'm overcomplicating things). Here's an image with a small explanation . One other problem would be that the conditions for bouncing have to be different for every edge. For example, in the picture, on the two small horizontal edges i do not want a perfectly vertical bounce when in the middle of the edge but rather a constant angle (pi/4 maybe) in either direction depending on the collision point (before the middle of the edge, or after). All of my collisions are done with the Separating Axes Theorem (and seem to work fine). I'm looking for something efficient because i want to add a lot of things later on (maybe polygons with many edges and such). So i need to keep to a minimum the amount of checking done every frame. The collision algorithm begins testing whenever the bounding boxes of the paddle and the ball intersect. Is there something better to test for possible collisions every frame? (more efficient in the long run,with many more objects etc, not necessarily easy to code). I'm going to post the code for my game: Paddle Class public class Paddle : Microsoft.Xna.Framework.DrawableGameComponent { #region Private Members private SpriteBatch spriteBatch; private ContentManager contentManager; private bool keybEnabled; private bool isLeftPaddle; private Texture2D paddleSprite; private Vector2 paddlePosition; private float paddleSpeedY; private Vector2 paddleScale = new Vector2(1f, 1f); private const float DEFAULT_Y_SPEED = 150; private Vector2[] Normals2Edges; private Vector2[] Vertices = new Vector2[4]; private List<Vector2> lst = new List<Vector2>(); private Vector2 Edge; #endregion #region Properties public float Speed { get {return paddleSpeedY; } set { paddleSpeedY = value; } } public Vector2[] Normal2EdgesVector { get { NormalsToEdges(this.isLeftPaddle); return Normals2Edges; } } public Vector2[] VertexVector { get { return Vertices; } } public Vector2 Scale { get { return paddleScale; } set { paddleScale = value; NormalsToEdges(this.isLeftPaddle); } } public float X { get { return paddlePosition.X; } set { paddlePosition.X = value; } } public float Y { get { return paddlePosition.Y; } set { paddlePosition.Y = value; } } public float Width { get { return (Scale.X == 1f ? (float)paddleSprite.Width : paddleSprite.Width * Scale.X); } } public float Height { get { return ( Scale.Y==1f ? (float)paddleSprite.Height : paddleSprite.Height*Scale.Y ); } } public Texture2D GetSprite { get { return paddleSprite; } } public Rectangle Boundary { get { return new Rectangle((int)paddlePosition.X, (int)paddlePosition.Y, (int)this.Width, (int)this.Height); } } public bool KeyboardEnabled { get { return keybEnabled; } } #endregion private void NormalsToEdges(bool isLeftPaddle) { Normals2Edges = null; Edge = Vector2.Zero; lst.Clear(); for (int i = 0; i < Vertices.Length; i++) { Edge = Vertices[i + 1 == Vertices.Length ? 0 : i + 1] - Vertices[i]; if (Edge != Vector2.Zero) { Edge.Normalize(); //outer normal to edge !! 
(origin in top-left) lst.Add(new Vector2(Edge.Y, -Edge.X)); } } Normals2Edges = lst.ToArray(); } public float[] ProjectPaddle(Vector2 axis) { if (Vertices.Length == 0 || axis == Vector2.Zero) return (new float[2] { 0, 0 }); float min, max; min = Vector2.Dot(axis, Vertices[0]); max = min; for (int i = 1; i < Vertices.Length; i++) { float p = Vector2.Dot(axis, Vertices[i]); if (p < min) min = p; else if (p > max) max = p; } return (new float[2] { min, max }); } public Paddle(Game game, bool isLeftPaddle, bool enableKeyboard = true) : base(game) { contentManager = new ContentManager(game.Services); keybEnabled = enableKeyboard; this.isLeftPaddle = isLeftPaddle; } public void setPosition(Vector2 newPos) { X = newPos.X; Y = newPos.Y; } public override void Initialize() { base.Initialize(); this.Speed = DEFAULT_Y_SPEED; X = 0; Y = 0; NormalsToEdges(this.isLeftPaddle); } protected override void LoadContent() { spriteBatch = new SpriteBatch(GraphicsDevice); paddleSprite = contentManager.Load<Texture2D>(@"Content\pongBar"); } public override void Update(GameTime gameTime) { //vertices array Vertices[0] = this.paddlePosition; Vertices[1] = this.paddlePosition + new Vector2(this.Width, 0); Vertices[2] = this.paddlePosition + new Vector2(this.Width, this.Height); Vertices[3] = this.paddlePosition + new Vector2(0, this.Height); // Move paddle, but don't allow movement off the screen if (KeyboardEnabled) { float moveDistance = Speed * (float)gameTime.ElapsedGameTime.TotalSeconds; KeyboardState newKeyState = Keyboard.GetState(); if (newKeyState.IsKeyDown(Keys.Down) && Y + paddleSprite.Height + moveDistance <= Game.GraphicsDevice.Viewport.Height) { Y += moveDistance; } else if (newKeyState.IsKeyDown(Keys.Up) && Y - moveDistance >= 0) { Y -= moveDistance; } } else { if (this.Y + this.Height > this.GraphicsDevice.Viewport.Height) { this.Y = this.Game.GraphicsDevice.Viewport.Height - this.Height - 1; } } base.Update(gameTime); } public override void Draw(GameTime gameTime) { spriteBatch.Begin(SpriteSortMode.Texture,null); spriteBatch.Draw(paddleSprite, paddlePosition, null, Color.White, 0f, Vector2.Zero, Scale, SpriteEffects.None, 0); spriteBatch.End(); base.Draw(gameTime); } } Ball Class public class Ball : Microsoft.Xna.Framework.DrawableGameComponent { #region Private Members private SpriteBatch spriteBatch; private ContentManager contentManager; private const float DEFAULT_SPEED = 50; private float speedIncrement = 0; private Vector2 ballScale = new Vector2(1f, 1f); private const float INCREASE_SPEED = 50; private Texture2D ballSprite; //initial texture private Vector2 ballPosition; //position private Vector2 centerOfBall; //center coords private Vector2 ballSpeed = new Vector2(DEFAULT_SPEED, DEFAULT_SPEED); //speed #endregion #region Properties public float DEFAULTSPEED { get { return DEFAULT_SPEED; } } public Vector2 ballCenter { get { return centerOfBall; } } public Vector2 Scale { get { return ballScale; } set { ballScale = value; } } public float SpeedX { get { return ballSpeed.X; } set { ballSpeed.X = value; } } public float SpeedY { get { return ballSpeed.Y; } set { ballSpeed.Y = value; } } public float X { get { return ballPosition.X; } set { ballPosition.X = value; } } public float Y { get { return ballPosition.Y; } set { ballPosition.Y = value; } } public Texture2D GetSprite { get { return ballSprite; } } public float Width { get { return (Scale.X == 1f ? (float)ballSprite.Width : ballSprite.Width * Scale.X); } } public float Height { get { return (Scale.Y == 1f ? 
(float)ballSprite.Height : ballSprite.Height * Scale.Y); } } public float SpeedIncreaseIncrement { get { return speedIncrement; } set { speedIncrement = value; } } public Rectangle Boundary { get { return new Rectangle((int)ballPosition.X, (int)ballPosition.Y, (int)this.Width, (int)this.Height); } } #endregion public Ball(Game game) : base(game) { contentManager = new ContentManager(game.Services); } public void Reset() { ballSpeed.X = DEFAULT_SPEED; ballSpeed.Y = DEFAULT_SPEED; ballPosition.X = Game.GraphicsDevice.Viewport.Width / 2 - ballSprite.Width / 2; ballPosition.Y = Game.GraphicsDevice.Viewport.Height / 2 - ballSprite.Height / 2; } public void SpeedUp() { if (ballSpeed.Y < 0) ballSpeed.Y -= (INCREASE_SPEED + speedIncrement); else ballSpeed.Y += (INCREASE_SPEED + speedIncrement); if (ballSpeed.X < 0) ballSpeed.X -= (INCREASE_SPEED + speedIncrement); else ballSpeed.X += (INCREASE_SPEED + speedIncrement); } public float[] ProjectBall(Vector2 axis) { if (axis == Vector2.Zero) return (new float[2] { 0, 0 }); float min, max; min = Vector2.Dot(axis, this.ballCenter) - this.Width/2; //center - radius max = min + this.Width; //center + radius return (new float[2] { min, max }); } public void ChangeHorzDirection() { ballSpeed.X *= -1; } public void ChangeVertDirection() { ballSpeed.Y *= -1; } public override void Initialize() { base.Initialize(); ballPosition.X = Game.GraphicsDevice.Viewport.Width / 2 - ballSprite.Width / 2; ballPosition.Y = Game.GraphicsDevice.Viewport.Height / 2 - ballSprite.Height / 2; } protected override void LoadContent() { spriteBatch = new SpriteBatch(GraphicsDevice); ballSprite = contentManager.Load<Texture2D>(@"Content\ball"); } public override void Update(GameTime gameTime) { if (this.Y < 1 || this.Y > GraphicsDevice.Viewport.Height - this.Height - 1) this.ChangeVertDirection(); centerOfBall = new Vector2(ballPosition.X + this.Width / 2, ballPosition.Y + this.Height / 2); base.Update(gameTime); } public override void Draw(GameTime gameTime) { spriteBatch.Begin(); spriteBatch.Draw(ballSprite, ballPosition, null, Color.White, 0f, Vector2.Zero, Scale, SpriteEffects.None, 0); spriteBatch.End(); base.Draw(gameTime); } } Main game class public class gameStart : Microsoft.Xna.Framework.Game { GraphicsDeviceManager graphics; SpriteBatch spriteBatch; public gameStart() { graphics = new GraphicsDeviceManager(this); Content.RootDirectory = "Content"; this.Window.Title = "Pong game"; } protected override void Initialize() { ball = new Ball(this); paddleLeft = new Paddle(this,true,false); paddleRight = new Paddle(this,false,true); Components.Add(ball); Components.Add(paddleLeft); Components.Add(paddleRight); this.Window.AllowUserResizing = false; this.IsMouseVisible = true; this.IsFixedTimeStep = false; this.isColliding = false; base.Initialize(); } #region MyPrivateStuff private Ball ball; private Paddle paddleLeft, paddleRight; private int[] bit = { -1, 1 }; private Random rnd = new Random(); private int updates = 0; enum nrPaddle { None, Left, Right }; private nrPaddle PongBar = nrPaddle.None; private ArrayList Axes = new ArrayList(); private Vector2 MTV; //minimum translation vector private bool isColliding; private float overlap; //smallest distance after projections private Vector2 overlapAxis; //axis of overlap #endregion protected override void LoadContent() { spriteBatch = new SpriteBatch(GraphicsDevice); paddleLeft.setPosition(new Vector2(0, this.GraphicsDevice.Viewport.Height / 2 - paddleLeft.Height / 2)); paddleRight.setPosition(new 
Vector2(this.GraphicsDevice.Viewport.Width - paddleRight.Width, this.GraphicsDevice.Viewport.Height / 2 - paddleRight.Height / 2)); paddleLeft.Scale = new Vector2(1f, 2f); //scale left paddle } private bool ShapesIntersect(Paddle paddle, Ball ball) { overlap = 1000000f; //large value overlapAxis = Vector2.Zero; MTV = Vector2.Zero; foreach (Vector2 ax in Axes) { float[] pad = paddle.ProjectPaddle(ax); //pad0 = min, pad1 = max float[] circle = ball.ProjectBall(ax); //circle0 = min, circle1 = max if (pad[1] <= circle[0] || circle[1] <= pad[0]) { return false; } if (pad[1] - circle[0] < circle[1] - pad[0]) { if (Math.Abs(overlap) > Math.Abs(-pad[1] + circle[0])) { overlap = -pad[1] + circle[0]; overlapAxis = ax; } } else { if (Math.Abs(overlap) > Math.Abs(circle[1] - pad[0])) { overlap = circle[1] - pad[0]; overlapAxis = ax; } } } if (overlapAxis != Vector2.Zero) { MTV = overlapAxis * overlap; } return true; } protected override void Update(GameTime gameTime) { updates += 1; float ftime = 5 * (float)gameTime.ElapsedGameTime.TotalSeconds; if (updates == 1) { isColliding = false; int Xrnd = bit[Convert.ToInt32(rnd.Next(0, 2))]; int Yrnd = bit[Convert.ToInt32(rnd.Next(0, 2))]; ball.SpeedX = Xrnd * ball.SpeedX; ball.SpeedY = Yrnd * ball.SpeedY; ball.X += ftime * ball.SpeedX; ball.Y += ftime * ball.SpeedY; } else { updates = 100; ball.X += ftime * ball.SpeedX; ball.Y += ftime * ball.SpeedY; } //autorun :) paddleLeft.Y = ball.Y; //collision detection PongBar = nrPaddle.None; if (ball.Boundary.Intersects(paddleLeft.Boundary)) { PongBar = nrPaddle.Left; if (!isColliding) { Axes.Clear(); Axes.AddRange(paddleLeft.Normal2EdgesVector); //axis from nearest vertex to ball's center Axes.Add(FORMULAS.NormAxisFromCircle2ClosestVertex(paddleLeft.VertexVector, ball.ballCenter)); } } else if (ball.Boundary.Intersects(paddleRight.Boundary)) { PongBar = nrPaddle.Right; if (!isColliding) { Axes.Clear(); Axes.AddRange(paddleRight.Normal2EdgesVector); //axis from nearest vertex to ball's center Axes.Add(FORMULAS.NormAxisFromCircle2ClosestVertex(paddleRight.VertexVector, ball.ballCenter)); } } if (PongBar != nrPaddle.None && !isColliding) switch (PongBar) { case nrPaddle.Left: if (ShapesIntersect(paddleLeft, ball)) { isColliding = true; if (MTV != Vector2.Zero) ball.X += MTV.X; ball.Y += MTV.Y; ball.ChangeHorzDirection(); } break; case nrPaddle.Right: if (ShapesIntersect(paddleRight, ball)) { isColliding = true; if (MTV != Vector2.Zero) ball.X += MTV.X; ball.Y += MTV.Y; ball.ChangeHorzDirection(); } break; default: break; } if (!ShapesIntersect(paddleRight, ball) && !ShapesIntersect(paddleLeft, ball)) isColliding = false; ball.X += ftime * ball.SpeedX; ball.Y += ftime * ball.SpeedY; //check ball movement if (ball.X > paddleRight.X + paddleRight.Width + 2) { //IncreaseScore(Left); ball.Reset(); updates = 0; return; } else if (ball.X < paddleLeft.X - 2) { //IncreaseScore(Right); ball.Reset(); updates = 0; return; } base.Update(gameTime); } protected override void Draw(GameTime gameTime) { GraphicsDevice.Clear(Color.Aquamarine); spriteBatch.Begin(SpriteSortMode.BackToFront, BlendState.AlphaBlend); spriteBatch.End(); base.Draw(gameTime); } } And one method i've used: public static Vector2 NormAxisFromCircle2ClosestVertex(Vector2[] vertices, Vector2 circle) { Vector2 temp = Vector2.Zero; if (vertices.Length > 0) { float dist = (circle.X - vertices[0].X) * (circle.X - vertices[0].X) + (circle.Y - vertices[0].Y) * (circle.Y - vertices[0].Y); for (int i = 1; i < vertices.Length;i++) { if (dist > (circle.X - vertices[i].X) * 
(circle.X - vertices[i].X) + (circle.Y - vertices[i].Y) * (circle.Y - vertices[i].Y)) { temp = vertices[i]; //memorize the closest vertex dist = (circle.X - vertices[i].X) * (circle.X - vertices[i].X) + (circle.Y - vertices[i].Y) * (circle.Y - vertices[i].Y); } } temp = circle - temp; temp.Normalize(); } return temp; } Thanks in advance for any tips on the 4 issues. EDIT1: Something isn't working properly. The collision axis doesn't come out right and the interpolation also seems to have no effect. I've changed the code a bit: private bool ShapesIntersect(Paddle paddle, Ball ball) { overlap = 1000000f; //large value overlapAxis = Vector2.Zero; MTV = Vector2.Zero; foreach (Vector2 ax in Axes) { float[] pad = paddle.ProjectPaddle(ax); //pad0 = min, pad1 = max float[] circle = ball.ProjectBall(ax); //circle0 = min, circle1 = max if (pad[1] < circle[0] || circle[1] < pad[0]) { return false; } if (Math.Abs(pad[1] - circle[0]) < Math.Abs(circle[1] - pad[0])) { if (Math.Abs(overlap) > Math.Abs(-pad[1] + circle[0])) { overlap = -pad[1] + circle[0]; overlapAxis = ax * (-1); } //to get the proper axis } else { if (Math.Abs(overlap) > Math.Abs(circle[1] - pad[0])) { overlap = circle[1] - pad[0]; overlapAxis = ax; } } } if (overlapAxis != Vector2.Zero) { MTV = overlapAxis * Math.Abs(overlap); } return true; } And part of the Update method: if (ShapesIntersect(paddleRight, ball)) { isColliding = true; if (MTV != Vector2.Zero) { ball.X += MTV.X; ball.Y += MTV.Y; } //test if (overlapAxis.X == 0) //collision with horizontal edge { } else if (overlapAxis.Y == 0) //collision with vertical edge { float factor = Math.Abs(ball.ballCenter.Y - paddleRight.Y) / paddleRight.Height; if (factor > 1) factor = 1f; if (overlapAxis.X < 0) //left edge? ball.Speed = ball.DEFAULTSPEED * Vector2.Normalize(Vector2.Reflect(ball.Speed, (Vector2.Lerp(new Vector2(-1, -3), new Vector2(-1, 3), factor)))); else //right edge? ball.Speed = ball.DEFAULTSPEED * Vector2.Normalize(Vector2.Reflect(ball.Speed, (Vector2.Lerp(new Vector2(1, -3), new Vector2(1, 3), factor)))); } else //vertex collision??? { ball.Speed = -ball.Speed; } } What seems to happen is that "overlapAxis" doesn't always return the right one. So instead of (-1,0) i get the (1,0) (this happened even before i multiplied with -1 there). Sometimes there isn't even a collision registered even though the ball passes through the paddle... The interpolation also seems to have no effect as the angles barely change (or the overlapAxis is almost never (-1,0) or (1,0) but something like (0.9783473, 0.02743843)... ). What am i missing here? :(
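    On the first issue (bouncing between a min and a max angle), one way to avoid wrestling with Reflect/Lerp is to derive the outgoing direction purely from where the ball hit the paddle. A small sketch of the math in Python; it ports directly to the C# above, and the names (hit_offset, min_angle, max_angle) are mine, not from the code:

    import math

    def bounce_direction(hit_offset, min_angle, max_angle, moving_right):
        """Outgoing unit vector after a paddle hit.

        hit_offset   -- contact point, normalized to [-1, 1]
                        (-1 = top end of the paddle, 0 = centre, +1 = bottom end)
        min_angle    -- smallest allowed angle from the horizontal, in radians
        max_angle    -- largest allowed angle from the horizontal, in radians
        moving_right -- True if the ball should leave towards +x
        """
        hit_offset = max(-1.0, min(1.0, hit_offset))
        # Interpolate the angle magnitude by how far from the centre we hit;
        # the sign of the offset decides up or down.
        magnitude = min_angle + abs(hit_offset) * (max_angle - min_angle)
        angle = math.copysign(magnitude, hit_offset)
        dx = math.cos(angle) if moving_right else -math.cos(angle)
        dy = math.sin(angle)
        return dx, dy

    # A centre hit leaves shallow, an edge hit leaves at the max angle:
    print(bounce_direction(0.0, math.pi / 12, math.pi / 3, True))
    print(bounce_direction(1.0, math.pi / 12, math.pi / 3, True))

    Multiply the returned unit vector by the ball's scalar speed to get the new velocity. Because the magnitude is interpolated between min_angle and max_angle and the sign comes from the hit offset, the bounce can never be flatter or steeper than the limits, and the same idea (with a different axis) covers the constant pi/4 behaviour wanted on the short horizontal edges.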

    Read the article

  • APC (PHP Cache) Uptime 0 minutes, not caching

    - by Jussi
    My goal is to implement APC as an opcode cache for a Drupal 6 production site. I have so far tested APC with several PHP files, with and without including other PHP files via include_once. I also tried tweaking the apc.ini values for shm_size, apc.include_once_override and apc.stat, restarting Apache every time. The result: apc.php shows no change in any values (except, of course, the changed apc.ini values, which show up as they should). Every time I refresh the apc.php test page, the start time resets to the current time, showing an uptime of 0 minutes. The apc.php test page shows:

    General Cache Information
    APC Version 3.1.9
    PHP Version 5.2.10
    APC Host xxxx.xx.xx
    Server Software Apache/2.2.3 (CentOS)
    Shared Memory 1 Segment(s) with 128.0 MBytes (mmap memory, pthread mutex Locks locking)
    Start Time 2011/07/26 11:53:56
    Uptime 0 minutes
    File Upload Support 1

    Cached Files 0 ( 0.0 Bytes)
    Hits 1
    Misses 1
    Request Rate (hits, misses) 2.00 cache requests/second
    Hit Rate 1.00 cache requests/second
    Miss Rate 1.00 cache requests/second
    Insert Rate 0.00 cache requests/second
    Cache full count 0

    Cached Variables 0 ( 0.0 Bytes)
    Hits 0
    Misses 0
    Request Rate (hits, misses) 0.00 cache requests/second
    Hit Rate 0.00 cache requests/second
    Miss Rate 0.00 cache requests/second
    Insert Rate 0.00 cache requests/second
    Cache full count 0

    apc.cache_by_default 1
    apc.canonicalize 1
    apc.coredump_unmap 0
    apc.enable_cli 0
    apc.enabled 1
    apc.file_md5 0
    apc.file_update_protection 2
    apc.filters
    apc.gc_ttl 3600
    apc.include_once_override 0
    apc.lazy_classes 0
    apc.lazy_functions 0
    apc.max_file_size 16
    apc.mmap_file_mask /tmp/apcphp5.095eRm
    apc.num_files_hint 1024
    apc.preload_path
    apc.report_autofilter 0
    apc.rfc1867 0
    apc.rfc1867_freq 0
    apc.rfc1867_name APC_UPLOAD_PROGRESS
    apc.rfc1867_prefix upload_
    apc.rfc1867_ttl 3600
    apc.serializer default
    apc.shm_segments 1
    apc.shm_size 128M
    apc.slam_defense 0
    apc.stat 0
    apc.stat_ctime 0
    apc.ttl 7200
    apc.use_request_time 1
    apc.user_entries_hint 4096
    apc.user_ttl 7200
    apc.write_lock 1

    Host Status Diagrams:
    Free: 128.0 MBytes (100.0%)
    Hits: 1 (50.0%)
    Used: 20.3 KBytes (0.0%)
    Misses: 1 (50.0%)
    Detailed Memory Usage and Fragmentation:
    Fragmentation: 0%

    phpinfo shows:
    Server API CGI/FastCGI
    APC: Version 3.1.9
    APC Debugging Enabled
    MMAP Support Enabled
    MMAP File Mask /tmp/apcphp5.JkKDk7
    Locking type pthread mutex Locks
    Serialization Support php
    Revision $Revision: 308812 $
    Build Date Jul 21 2011 14:31:12

    I followed these steps to find out whether suexec settings could prevent caching: http://www.litespeedtech.com/support/forum/showthread.php?t=4189

    [root@host /]# ps -ef|grep lsphp
    root 20402 17833 0 11:21 pts/0 00:00:00 grep lsphp
    [root@host /]# ps -waux
    root 17833 0.0 0.1 5004 1484 pts/0 S 10:39 0:00 bash

    ...which indicates that there is no lsphp running on the host. I also read the following article and comments, concluding that in my case the problem is not suexec, as the user apache is the httpd process owner: http://www.brandonturner.net/blog/2009/07/fastcgi_with_php_opcode_cache/. The suexec command is also not recognized when logged in as root on the host, and I'm almost confident there is no cPanel running on the host whose settings could be resetting the cache process at some interval.

    This leaves me with few clues about where to head next. I tried setting (with chown and chgrp) apache as the owner of apc.php and some test PHP files, which resulted in a 500 server error. Is there a way to check whether file permissions prevent APC from staying running? I'm tremendously grateful for any suggestions or help.

    Read the article

  • APC File Cache not working but user cache is fine

    - by danishgoel
    I have just got a VPS (with cPanel/WHM) to test what gains I could get in my application by using the APC file cache AND user cache. First I got PHP 5.3 compiled as a DSO (Apache module), then installed APC via PECL through SSH. (I first tried the WHM module installer; it had the same problem, so I tried it via SSH.) All seemed fine, and phpinfo showed APC loaded and enabled. Then I checked with apc.php; all seemed OK. But as I started testing my PHP application, the File Cache Information stats in APC state:

    Cached Files 0 ( 0.0 Bytes)
    Hits 1
    Misses 0
    Request Rate (hits, misses) 0.00 cache requests/second
    Hit Rate 0.00 cache requests/second
    Miss Rate 0.00 cache requests/second
    Insert Rate 0.00 cache requests/second
    Cache full count 0

    This means no PHP files were being cached, even though I had browsed through over 10 PHP files with multiple includes, so there should have been some cached files. But the user cache is functioning fine:

    User Cache Information
    Cached Variables 0 ( 0.0 Bytes)
    Hits 1000
    Misses 1000
    Request Rate (hits, misses) 0.84 cache requests/second
    Hit Rate 0.42 cache requests/second
    Miss Rate 0.42 cache requests/second
    Insert Rate 0.84 cache requests/second
    Cache full count 0

    That is actually from an APC caching test script which tries to store and retrieve 1000 entries and gives me the times; a sort of simple benchmark. Can anyone help me here? Even though apc.cache_by_default = 1, no PHP files are being cached. This is my APC config:

    Runtime Settings
    apc.cache_by_default 1
    apc.canonicalize 1
    apc.coredump_unmap 0
    apc.enable_cli 0
    apc.enabled 1
    apc.file_md5 0
    apc.file_update_protection 2
    apc.filters
    apc.gc_ttl 3600
    apc.include_once_override 0
    apc.lazy_classes 0
    apc.lazy_functions 0
    apc.max_file_size 1M
    apc.mmap_file_mask
    apc.num_files_hint 1000
    apc.preload_path
    apc.report_autofilter 0
    apc.rfc1867 0
    apc.rfc1867_freq 0
    apc.rfc1867_name APC_UPLOAD_PROGRESS
    apc.rfc1867_prefix upload_
    apc.rfc1867_ttl 3600
    apc.serializer default
    apc.shm_segments 1
    apc.shm_size 32M
    apc.slam_defense 1
    apc.stat 1
    apc.stat_ctime 0
    apc.ttl 0
    apc.use_request_time 1
    apc.user_entries_hint 4096
    apc.user_ttl 0
    apc.write_lock 1

    Also, most PHP files are under 20 KB, so apc.max_file_size = 1M is not the cause. I have also tried using apc_compile_file to force some files into the opcode cache, with no luck. I have also re-installed APC with debugging enabled, but nothing shows in the error_log. I have also tried setting mmap_file_mask to /dev/zero and /tmp/apc.xxxxxx, and set /tmp permissions to 777, to no avail. Any clue, anyone?

    Update: I have tried the following things and none cause the APC file cache to populate:
    1. Set apc.enable_cli = 1 AND run a script from the CLI.
    2. Set apc.max_file_size = 5M (just in case).
    3. Switched the PHP handler from DSO to FastCGI in WHM (then switched it back to DSO as it did not solve the problem).
    4. Even tried restarting the container.

    Read the article

  • What is the best way to optimize ClearCase dynamic view performance on a Linux client?

    - by gambit
    cleartool getcache -view -cview
    Lookup cache: 94% full, 6080 entries (307.0K), 308777 requests, 86% hits
    Readdir cache: 77% full, 4534 entries (1259.4K), 52233 requests, 91% hits
    Fstat cache: 89% full, 6188 entries (870.2K), 137811 requests, 100% hits
    Object cache: 100% full, 6188 entries (1146.9K), 290977 requests, 42% hits
    Total memory used for view caches: 3583.5Kbytes

    The current view server cache limits are:
    Lookup cache: 335520 bytes
    Readdir cache: 1677721 bytes
    Fstat cache: 1006560 bytes
    Object cache: 1174320 bytes
    Total cache size limit: 4194304 bytes

    Should I try to get my Object cache hit rate to 100%? I have 2 GB of RAM.
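    As a rough way to decide which cache to grow first, here is a small Python sketch (it assumes only the getcache output format quoted above) that flags caches that are both nearly full and missing often; by that test the Object cache (100% full, 42% hits) is the one to enlarge. A literal 100% hit rate is usually out of reach, since some objects are only ever touched once:

    import re

    getcache_output = """\
    Lookup cache: 94% full, 6080 entries (307.0K), 308777 requests, 86% hits
    Readdir cache: 77% full, 4534 entries (1259.4K), 52233 requests, 91% hits
    Fstat cache: 89% full, 6188 entries (870.2K), 137811 requests, 100% hits
    Object cache: 100% full, 6188 entries (1146.9K), 290977 requests, 42% hits
    """

    line_re = re.compile(r"(\w+) cache: (\d+)% full, .*?, (\d+) requests, (\d+)% hits")
    for name, full, requests, hits in line_re.findall(getcache_output):
        full, requests, hits = int(full), int(requests), int(hits)
        misses = requests * (100 - hits) // 100
        note = "  <-- full and missing often: grow this cache" if full >= 95 and hits < 80 else ""
        print("%-8s full=%3d%%  hits=%3d%%  ~misses=%d%s" % (name, full, hits, misses, note))

    With 2 GB of RAM on the client, raising the 4 MB total view-cache limit is cheap, so giving the Object cache a few more megabytes and re-measuring the hit rate is a low-risk first step.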

    Read the article

  • Will spreading your server's load not just consume more resources?

    - by Saif Bechan
    I am running a heavy real-time-updating website. The amount of resources needed per user is quite high; I'll give you an example.

    Setup
    Every visit: the application is PHP/MySQL, so on every visit static and dynamic content is loaded. Resources: Apache, PHP, MySQL.
    Every second (anything longer than a second is just too long): the website needs to update in real time, so every second there is an AJAX call that updates the page. Resources: jQuery, Apache, PHP, MySQL.
    Average spending for a single user (spending one minute, visiting 3 pages):
    Apache: +/- 63 requests/responses serving static and dynamic content (img, css, js, html)
    PHP: +/- 63 requests/responses
    MySQL: +/- 63 requests/responses
    jQuery: +/- 60 requests/responses

    Optimization
    I want to optimize this process, but I think that maybe it would end up just the same. Before implementing and testing (which will take weeks) I wanted some second opinions from you guys.
    Every visit: I want to start off by putting nginx in front, working as a proxy to deliver the static content. Resources: dynamic: Apache, PHP, MySQL; static: nginx. This will take a lot of load off Apache.
    Every second: for the script that runs every second I want to set up Node.js (server-side JavaScript) with nginx in front. I want jQuery to make one request per minute, and Node.js to stream the data to the client every second. Resources: jQuery, nginx, Node.js, MySQL.
    Average spending for a single user (spending one minute, visiting 3 pages):
    nginx: 4 requests/responses serving mostly static content (img, css, js)
    Apache: 3 requests, only the pages
    PHP: 3 requests, only the pages
    Node.js: 1 request / 60 responses
    jQuery: 1 request / 60 responses
    MySQL: 63 requests/responses

    As you can see, the optimization lifts the load from Apache and PHP and places it on nginx and Node.js, which are known for their light footprint and good performance. But I have my doubts, because there are still 2 extra programs loaded in memory and consuming CPU. So is it better to have fewer programs doing the job, or more? Before I spend a lot of time setting this up, I would like to know whether it will be worth it.
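    Before building any of this, it is worth putting numbers on the two setups. A back-of-the-envelope sketch in Python, using the per-user-minute figures from the question:

    # Requests per user-minute, from the numbers above (a rough sketch, not a benchmark).
    current = {"apache": 63, "php": 63, "mysql": 63, "jquery": 60}
    proposed = {"nginx": 4, "apache": 3, "php": 3, "node.js": 1, "jquery": 1, "mysql": 63}

    cur_total = sum(current.values())
    new_total = sum(proposed.values())
    print(f"current setup:  {cur_total} requests per user-minute")
    print(f"proposed setup: {new_total} requests per user-minute")
    print(f"reduction: {100 * (cur_total - new_total) / cur_total:.0f}%")
    # MySQL stays at 63 either way, so the database, not the extra
    # daemons' fixed memory, is the cost this change does not touch.

    The totals come out to 249 versus 75 requests per user-minute, roughly a 70% cut, so the two extra resident daemons buy a large drop in per-request work; the open question left by the arithmetic is the unchanged MySQL load.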

    Read the article

  • How can I draw a log-normalized imshow plot with a colorbar representing the raw data in matplotlib

    - by Adam Fraser
    I'm using matplotlib to plot log-normalized images but I would like the original raw image data to be represented in the colorbar rather than the [0-1] interval. I get the feeling there's a more matplotlib'y way of doing this by using some sort of normalization object and not transforming the data beforehand... in any case, there could be negative values in the raw image.

    import matplotlib.pyplot as plt
    import numpy as np

    def log_transform(im):
        '''returns log(image) scaled to the interval [0,1]'''
        try:
            (min, max) = (im[im > 0].min(), im.max())
            if (max > min) and (max > 0):
                return (np.log(im.clip(min, max)) - np.log(min)) / (np.log(max) - np.log(min))
        except:
            pass
        return im

    a = np.ones((100,100))
    for i in range(100):
        a[i] = i

    f = plt.figure()
    ax = f.add_subplot(111)
    res = ax.imshow(log_transform(a))
    # the colorbar drawn shows [0-1], but I want to see [0-99]
    cb = f.colorbar(res)

    I've tried using cb.set_array, but that didn't appear to do anything, and cb.set_clim, but that rescales the colors completely. Thanks in advance for any help :)
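    For what it's worth, matplotlib does ship a normalization object for exactly this: matplotlib.colors.LogNorm log-scales only the color mapping while the raw data stays attached, so the colorbar is labelled with original values. A minimal sketch of the idea:

    import matplotlib.pyplot as plt
    import numpy as np
    from matplotlib.colors import LogNorm

    a = np.ones((100, 100))
    for i in range(100):
        a[i] = i

    f = plt.figure()
    ax = f.add_subplot(111)
    # Log-scale the color mapping only; the data itself is untransformed,
    # so the colorbar ticks show raw values (1 .. 99 here, log-spaced).
    res = ax.imshow(a, norm=LogNorm(vmin=a[a > 0].min(), vmax=a.max()))
    cb = f.colorbar(res)
    plt.show()

    Values of 0 or below fall outside a log scale and get masked by LogNorm, which is why the negative-value case still needs explicit masking or matplotlib.colors.SymLogNorm, the usual escape hatch for data that crosses zero.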

    Read the article

  • @media queries - one rule overrides another?

    - by John
    I have multiple @media queries, all working fine, but as soon as I put in a max screen-width higher than 1024px, the rules for the higher width get applied to everything.

    @media screen and (max-width: 1400px) {
        #wrap { width: 72%; }
    }

    @media screen and (max-width: 1024px) {
        #slider h2 { width: 100%; }
        #slider img { margin: 60px 0.83333333333333% 0 2.08333333333333%; }
        .recent { width: 45.82%; margin: 10px 2.08333333333334% 0 1.875%; }
    }

    As you can see, the 1024px query (and also the 800px max-width query) does not change the #wrap width, and both work fine. But as soon as I add the 1400px max-width query, it changes #wrap to 72% for ALL sizes, and it does the same for any element: for instance, if I set #slider img to have a margin of 40px there, it shows at ALL sizes even though it is only in the max-width: 1400px query. Am I missing something really obvious? I've been trying to work this out for the past 2 days! Thanks, John
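    What's happening is that max-width queries are not exclusive ranges: every query whose condition holds applies at the same time, and later rules only win where they actually redefine a property. A tiny Python sketch of the matching logic makes it visible (the limits mirror the CSS above):

    queries = {
        "max-width: 1400px": 1400,   # sets #wrap { width: 72%; }
        "max-width: 1024px": 1024,   # does NOT mention #wrap
        "max-width: 800px": 800,
    }

    for viewport in (1600, 1200, 1024, 800):
        matching = [q for q, limit in queries.items() if viewport <= limit]
        print(f"viewport {viewport}px matches: {matching or 'none'}")
    # An 800px viewport matches all three queries, so the 72% width from
    # the 1400px block still applies unless a narrower block redefines #wrap.

    The usual fixes are to redefine #wrap in each narrower block, or to bound the wide rule with a range such as @media screen and (min-width: 1025px) and (max-width: 1400px).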

    Read the article

  • What is the best setting for using lighttpd on 8 GB of RAM?

    - by user299415
    I am running an 8 GB RAM, 8 x Xeon 3361 system. What is the best setting for handling simultaneous connections, and what is the maximum? Is a setting like this correct?

    server.max-keep-alive-requests = 0
    server.max-keep-alive-idle = 10
    server.max-read-idle = 60
    server.max-write-idle = 60
    server.event-handler = "linux-sysepoll"
    server.max-fds = 2048
    fastcgi.server = ( ".php" =
      ( "localhost" =
        ( "socket" = "/tmp/php-fastcgi.socket",
          "bin-path" = "/usr/bin/php-cgi",
          "max-procs" = 20,
          "bin-environment" = ( "PHP_FCGI_CHILDREN" = "40", "PHP_FCGI_MAX_REQUESTS" = "800" ),
          "broken-scriptfilename" = "enable"
        )
      )
    )

    Please help me!
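    One sanity check before tuning connection limits: max-procs = 20 combined with PHP_FCGI_CHILDREN = 40 allows up to 800 PHP workers, which is a lot for 8 GB. A rough sizing sketch in Python (the per-worker memory figure is an assumption; measure your own processes' RSS):

    max_procs = 20          # "max-procs" from the config above
    children_per_proc = 40  # PHP_FCGI_CHILDREN from the config above
    assumed_rss_mb = 30     # assumed resident size per PHP worker; measure yours

    workers = max_procs * children_per_proc
    est_memory_gb = workers * assumed_rss_mb / 1024
    print(f"{workers} PHP workers, ~{est_memory_gb:.1f} GB at {assumed_rss_mb} MB each")
    # -> 800 PHP workers, ~23.4 GB at 30 MB each: well past 8 GB of RAM,
    #    so fewer, busier workers is the safer direction.

    In other words, under these assumptions the concurrency ceiling is set by PHP worker memory long before lighttpd's max-fds = 2048 becomes the limit.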

    Read the article

  • Circular Tally Counter Not Rolling Over

    - by chris ward
    I was practicing my Java and was trying to make a simple counter with rollover at max, but for some reason it wasn't rolling over at the right point: with count++ the comparison reads the old value, so the counter runs one step past max before resetting. Pre-incrementing fixes the rollover:

    public class HandTallyCounter {
        private int max;
        private int count;

        public HandTallyCounter(int max) {
            this.max = max;
            count = 0;
        }

        public void click() {
            // ++count increments before the comparison, so the wrap happens
            // as soon as count would exceed max (count++ compared the old
            // value and let the counter reach max + 1 before resetting).
            if (++count > max) {
                count = 0;
            }
        }

        public int getCount() {
            return count;
        }

        public void reset() {
            count = 0;
        }
    }
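    For comparison, the same rollover expressed with modular arithmetic, sketched here in Python, avoids the pre/post-increment trap entirely:

    class TallyCounter:
        def __init__(self, maximum):
            self.maximum = maximum
            self.count = 0

        def click(self):
            # (maximum + 1) distinct states: 0, 1, ..., maximum, then back to 0.
            self.count = (self.count + 1) % (self.maximum + 1)

    c = TallyCounter(3)
    clicks = []
    for _ in range(6):
        c.click()
        clicks.append(c.count)
    print(clicks)  # [1, 2, 3, 0, 1, 2]

    The modulo form makes the intended cycle length explicit, which is why it is the common idiom for ring buffers and wrap-around counters.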

    Read the article

  • In R, how to combine the summaries together

    - by alex
    Say I have 5 summaries for 5 sets of data. How can I get those numbers out, or combine the summaries into 1 rather than 5?

           V1               V2               V3               V4
     Min.   : 670.2   Min.   : 682.3   Min.   : 690.7   Min.   : 637.6
     1st Qu.: 739.9   1st Qu.: 737.2   1st Qu.: 707.7   1st Qu.: 690.7
     Median : 838.6   Median : 798.6   Median : 748.3   Median : 748.3
     Mean   : 886.7   Mean   : 871.0   Mean   : 869.6   Mean   : 865.4
     3rd Qu.:1076.8   3rd Qu.:1027.6   3rd Qu.:1070.0   3rd Qu.: 960.8
     Max.   :1107.8   Max.   :1109.3   Max.   :1131.3   Max.   :1289.6
           V5
     Min.   : 637.6
     1st Qu.: 690.7
     Median : 748.3
     Mean   : 924.3
     3rd Qu.: 960.8
     Max.   :1584.3

    How can I have 1 table that looks like:

              v1   v2   v3   v4   v5
     Min.   :
     1st Qu.:
     Median :
     Mean   :
     3rd Qu.:
     Max.   :

    Or how can I save those numbers as a vector, so I can use matrix() to generate a table?

    Read the article

  • fill an array with Int like a Char; C++, cin object

    - by Duknov007
    This is a pretty simple question; first-time poster and long-time looker. Here is a binary-to-decimal converter I wrote:

    #include <iostream>
    #include <cmath>
    using namespace std;

    const int MAX = 6;

    int conv(int zelda[], int link[], int n);

    int main()
    {
        int zelda[MAX];
        int link[MAX];
        cout << "Enter a binary number: \n";
        int i = 0;
        while (i < MAX && (cin >> zelda[i]).get()) // input loop, one digit per line
        {
            ++i;
        }
        cout << conv(zelda, link, MAX);
        cin.get();
        return 0;
    }

    int conv(int zelda[], int link[], int n)
    {
        int sum = 0;
        for (int t = 0; t < n; t++)
        {
            // fill link with the bit weights, most significant first
            for (int h = n - 1, i = 0; h >= 0; --h, ++i)
                if (zelda[t] == 1)
                    link[h] = pow(2.0, i);
                else
                    link[h] = 0;
            sum += link[t];
        }
        return sum;
    }

    With the way the input loop is handled, I have to press Enter after each digit. I haven't added any error correction yet (and some of my variables are vaguely named), but I would like to enter a binary number, say 111111, instead of 1, Enter, 1, Enter, 1, Enter, and so on to fill the array. I am open to any technique and other suggestions. Maybe input it as a string and convert it to an int? I will keep researching. Thanks.
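    The string idea is the right instinct. Here is the shape of it sketched in Python (the read-one-string, walk-the-characters loop translates directly to C++ with std::string and std::getline):

    def binary_to_decimal(bits: str) -> int:
        """Convert a string like '111111' to its decimal value."""
        value = 0
        for ch in bits.strip():
            if ch not in "01":
                raise ValueError(f"not a binary digit: {ch!r}")
            value = value * 2 + int(ch)  # shift the accumulated value left, add the new bit
        return value

    print(binary_to_decimal("111111"))  # 63
    print(binary_to_decimal("101"))     # 5

    Reading one whole token also removes the fixed-size array and the per-digit Enter presses, since the string's own length tells you how many bits were typed.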

    Read the article

< Previous Page | 45 46 47 48 49 50 51 52 53 54 55 56  | Next Page >