Search Results

Search found 19441 results on 778 pages for 'static libraries'.

Page 113/778 | < Previous Page | 109 110 111 112 113 114 115 116 117 118 119 120  | Next Page >

  • Setting up a "cookieless domain" to improve site performance

    - by Django Reinhardt
    I was reading in Google's documentation about improving site speed. One of their recommendations is serving static content (images, css, js, etc.) from a "cookieless domain": Static content, such as images, JS and CSS files, don't need to be accompanied by cookies, as there is no user interaction with these resources. You can decrease request latency by serving static resources from a domain that doesn't serve cookies. Google then says that the best way to do this is to buy a new domain and set it to point to your current one: To reserve a cookieless domain for serving static content, register a new domain name and configure your DNS database with a CNAME record that points the new domain to your existing domain A record. Configure your web server to serve static resources from the new domain, and do not allow any cookies to be set anywhere on this domain. In your web pages, reference the domain name in the URLs for the static resources. This is pretty straightforward stuff, except for the bit where it says to "configure your web server to serve static resources from the new domain, and do not allow any cookies to be set anywhere on this domain". From what I've read, there's no setting in IIS that allows you to say "serve static resources", so how do I prevent ASP.NET from setting cookies on this new domain? At present, even if I'm just requesting a .jpg from the new domain, it sets a cookie on my browser, even though our application's cookies are set to our old domain. For example, ASP.NET sets an ".ASPXANONYMOUS" cookie that (as far as I'm aware) we're not telling it to set. Apologies if this is a real newb question, I'm new at this! Thanks.
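
    One hedged way to approach the "no cookies" requirement, not taken from the question itself: the ".ASPXANONYMOUS" cookie comes from ASP.NET's anonymous identification feature, so if the static domain is bound to its own IIS site with its own web.config, the features that set cookies by default can simply be switched off there. A minimal sketch, assuming a separate site for the static domain:

      <!-- web.config for the site bound to the new static domain (hypothetical setup) -->
      <configuration>
        <system.web>
          <!-- anonymous identification is what issues .ASPXANONYMOUS -->
          <anonymousIdentification enabled="false" />
          <!-- no ASP.NET_SessionId cookie for static files -->
          <sessionState mode="Off" />
          <roleManager enabled="false" />
        </system.web>
      </configuration>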

    Read the article

  • Something very strange with network

    - by Rodnower
    Hello, I have Windows 7 and something very strange is happening with my network. For a while I was connected through a wireless router: my IP was 192.168.2.103, the router's IP was 192.168.2.1, and some other IP, 192.168.2.100, showed up as well. I got that last address from the "active DHCP clients" page of the router's web interface, and on the "wireless clients" page I can see that 192.168.2.100 does not (!) belong to my MAC address. The router is made by Edimax. So after that I disabled the router's wireless function and restarted it. At that point I had no ping to 192.168.2.1. I also had no other connection at all, neither wireless nor cable, but (!) I still had a ping to 192.168.2.100, and I do not understand what this voodoo is...
    C:\Users\Andrey>ping 192.168.2.100 Pinging 192.168.2.100 with 32 bytes of data: Reply from 192.168.2.100: bytes=32 time<1ms TTL=128 Reply from 192.168.2.100: bytes=32 time<1ms TTL=128 Reply from 192.168.2.100: bytes=32 time<1ms TTL=128 Reply from 192.168.2.100: bytes=32 time<1ms TTL=128 Ping statistics for 192.168.2.100: Packets: Sent = 4, Received = 4, Lost = 0 (0% loss), Approximate round trip times in milli-seconds: Minimum = 0ms, Maximum = 0ms, Average = 0ms
    This is what I had: C:\Users\Andrey>ipconfig /all Windows IP Configuration Host Name . . . . . . . . . . . . : Andrey-PC Primary Dns Suffix . . . . . . . : Node Type . . . . . . . . . . . . : Hybrid IP Routing Enabled. . . . . . . . : No WINS Proxy Enabled. . . . . . . . : No Wireless LAN adapter Wireless Network Connection 3: Media State . . . . . . . . . . . : Media disconnected Connection-specific DNS Suffix . : Description . . . . . . . . . . . : Microsoft Virtual WiFi Miniport Adapter #2 Physical Address. . . . . . . . . : 06-1D-7D-40-61-EB DHCP Enabled. . . . . . . . . . . : Yes Autoconfiguration Enabled . . . . : Yes Wireless LAN adapter Wireless Network Connection: Media State . . . . . . . . . . . : Media disconnected Connection-specific DNS Suffix . : Description . . . . . . . . . . . : Gigabyte GN-WS50G (mini) PCI-E WLAN Card Physical Address. . . . . . . . . : 00-1D-7D-40-61-EB DHCP Enabled. . . . . . . . . . . : Yes Autoconfiguration Enabled . . . . : Yes Ethernet adapter Local Area Connection: Media State . . . . . . . . . . . : Media disconnected Connection-specific DNS Suffix . : Description . . . . . . . . . . . : Marvell Yukon 88E8055 PCI-E Gigabit Ethernet Controller Physical Address. . . . . . . . . : 00-1B-24-B6-09-91 DHCP Enabled. . . . . . . . . . . : Yes Autoconfiguration Enabled . . . . : Yes
    C:\Users\Andrey>arp -a -v Interface: 127.0.0.1 --- 0x1 Internet Address Physical Address Type 224.0.0.22 static 239.255.255.250 static Interface: 0.0.0.0 --- 0xffffffff Internet Address Physical Address Type 192.168.2.1 00-0e-2e-d2-8c-af invalid 192.168.2.255 ff-ff-ff-ff-ff-ff static 224.0.0.22 01-00-5e-00-00-16 static 224.0.0.252 01-00-5e-00-00-fc static 239.255.255.250 01-00-5e-7f-ff-fa static 255.255.255.255 ff-ff-ff-ff-ff-ff static Interface: 0.0.0.0 --- 0xffffffff Internet Address Physical Address Type 192.168.2.1 00-0e-2e-ff-f1-f6 dynamic 192.168.2.101 00-27-19-bc-8b-9c dynamic 192.168.2.102 00-16-e6-6c-ae-d4 dynamic 192.168.2.255 ff-ff-ff-ff-ff-ff static 224.0.0.22 01-00-5e-00-00-16 static 224.0.0.252 01-00-5e-00-00-fc static 239.255.255.250 01-00-5e-7f-ff-fa static 255.255.255.255 ff-ff-ff-ff-ff-ff static Interface: 0.0.0.0 --- 0xffffffff Internet Address Physical Address Type 224.0.0.22 01-00-5e-00-00-16 static 255.255.255.255 ff-ff-ff-ff-ff-ff static
    C:\Users\Andrey>route print =========================================================================== Interface List 14...06 1d 7d 40 61 eb ......Microsoft Virtual WiFi Miniport Adapter #2 13...00 1d 7d 40 61 eb ......Gigabyte GN-WS50G (mini) PCI-E WLAN Card 11...00 1b 24 b6 09 91 ......Marvell Yukon 88E8055 PCI-E Gigabit Ethernet Controller 1...........................Software Loopback Interface 1 =========================================================================== IPv4 Route Table =========================================================================== Active Routes: Network Destination Netmask Gateway Interface Metric 127.0.0.0 255.0.0.0 On-link 127.0.0.1 306 127.0.0.1 255.255.255.255 On-link 127.0.0.1 306 127.255.255.255 255.255.255.255 On-link 127.0.0.1 306 224.0.0.0 240.0.0.0 On-link 127.0.0.1 306 255.255.255.255 255.255.255.255 On-link 127.0.0.1 306 =========================================================================== Persistent Routes: None IPv6 Route Table =========================================================================== Active Routes: If Metric Network Destination Gateway 1 306 ::1/128 On-link 1 306 ff00::/8 On-link =========================================================================== Persistent Routes: None
    Only after a reboot did I lose the ping to that address: C:\Users\Andrey>ping 192.168.2.100 Pinging 192.168.2.100 with 32 bytes of data: PING: transmit failed. General failure. PING: transmit failed. General failure. PING: transmit failed. General failure. PING: transmit failed. General failure. Ping statistics for 192.168.2.100: Packets: Sent = 4, Received = 0, Lost = 4 (100% loss), So what is this mysterious cache? Thanks in advance.
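
    For what it's worth, one place to look for the "mysterious cache" the poster suspects is the ARP/neighbour cache, which Windows keeps per interface and which can hold an entry for a host that is no longer reachable. A hedged example of inspecting and flushing it on Windows 7 (standard commands, run from an elevated prompt; not taken from the question):

      rem list the cached IP-to-MAC entries
      arp -a
      rem delete every cached entry (requires an elevated prompt)
      arp -d *
      rem the same flush through netsh
      netsh interface ip delete arpcache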

    Read the article

  • Wrappers/law of demeter seems to be an anti-pattern...

    - by Robert Fraser
    I've been reading up on this "Law of Demeter" thing, and it (and pure "wrapper" classes in general) seem to generally be anti patterns. Consider an implementation class: class Foo { void doSomething() { /* whatever */ } } Now consider two different implementations of another class: class Bar1 { private static Foo _foo = new Foo(); public static Foo getFoo() { return _foo; } } class Bar2 { private static Foo _foo = new Foo(); public static void doSomething() { _foo.doSomething(); } } And the ways to call said methods: callingMethod() { Bar1.getFoo().doSomething(); // Version 1 Bar2.doSomething(); // Version 2 } At first blush, version 1 seems a bit simpler, and follows the "rule of Demeter", hide Foo's implementation, etc, etc. But this ties any changes in Foo to Bar. For example, if a parameter is added to doSomething, then we have: class Foo { void doSomething(int x) { /* whatever */ } } class Bar1 { private static Foo _foo = new Foo(); public static Foo getFoo() { return _foo; } } class Bar2 { private static Foo _foo = new Foo(); public static void doSomething(int x) { _foo.doSomething(x); } } callingMethod() { Bar1.getFoo().doSomething(5); // Version 1 Bar2.doSomething(5); // Version 2 } In both versions, Foo and callingMethod need to be changed, but in Version 2, Bar also needs to be changed. Can someone explain the advantage of having a wrapper/facade (with the exception of adapters or wrapping an external API or exposing an internal one).

    Read the article

  • Incorrect deployment of WSGI app to AWS using Elastic Beanstalk

    - by Dzmitry Zhaleznichenka
    cross-link to AWS forums I have developed a simple Python web service using WSGI and would like to deploy it to AWS cloud using Elastic Beanstalk. My problem is I cannot make all the options I specify in Elastic Beanstalk configuration to be correctly configured in the cloud. For deployment, I use Elastic Beanstalk CLI utility. I have run eb init command and set up the required parameters. After this, a directory named .elasticbeanstalk was created in my source tree. It has two config files that are used for deployment, namely config and optionsettings. The latter one among the other options contains the WSGI configuration that has to update /etc/httpd/conf.d/wsgi.conf at the instances. After some of my adjustments the file has the following settings: [aws:elasticbeanstalk:application:environment] DJANGO_SETTINGS_MODULE = PARAM1 = PARAM2 = PARAM4 = PARAM3 = PARAM5 = [aws:elasticbeanstalk:container:python] WSGIPath = handler.py NumProcesses = 2 StaticFiles = /static= NumThreads = 10 [aws:elasticbeanstalk:container:python:staticfiles] /static = static/ [aws:elasticbeanstalk:hostmanager] LogPublicationControl = false [aws:autoscaling:launchconfiguration] InstanceType = t1.micro EC2KeyName = zmicier-aws [aws:elasticbeanstalk:application] Application Healthcheck URL = [aws:autoscaling:asg] MaxSize = 10 MinSize = 1 Custom Availability Zones = [aws:elasticbeanstalk:monitoring] Automatically Terminate Unhealthy Instances = true [aws:elasticbeanstalk:sns:topics] Notification Endpoint = Notification Protocol = email It turns out that not all of these options are considered when I start the environment or update it. Thus, when I update NumThreads or NumProcesses, the respective parameters get changed in wsgi.conf as expected. But whatever I write to the WSGIPath and StaticFiles parameters, I'm not able to automatically change the respective values of wsgi.conf, they remain Alias /static /opt/python/current/app/ WSGIScriptAlias / /opt/python/current/app/application.py which drives me nuts. Moreover, when I deploy my application using git aws.push and having the following contents of .ebextensions/python.config file, neither of options I specify in it affects the deployment. option_settings: - namespace: aws:elasticbeanstalk:container:python option_name: WSGIPath value: mysite/wsgi.py - namespace: aws:elasticbeanstalk:container:python option_name: NumProcesses value: 5 - namespace: aws:elasticbeanstalk:container:python option_name: NumThreads value: 25 - namespace: aws:elasticbeanstalk:container:python:staticfiles option_name: /static/ value: app/static/ I wonder what I should do to force AWS use all the parameters I specify in the configuration, namely the WSGI Path and path to my static data.

    Read the article

  • Error invoking stored procedure with input parameter from ADO.Net

    - by George2
    Hello everyone, I am using VSTS 2008 + C# + .Net 3.5 + ADO.Net. Here is my code and related error message. The error message says, @Param1 is not supplied, but actually it is supplied in my code. Any ideas what is wrong? System.Data.SqlClient.SqlException: Procedure or function 'Pr_Foo' expects parameter '@Param1', which was not supplied. class Program { private static SqlCommand _command; private static SqlConnection connection; private static readonly string _storedProcedureName = "Pr_Foo"; private static readonly string connectionString = "server=.;integrated Security=sspi;initial catalog=FooDB"; public static void Prepare() { connection = new SqlConnection(connectionString); connection.Open(); _command = connection.CreateCommand(); _command.CommandText = _storedProcedureName; _command.CommandType = CommandType.StoredProcedure; } public static void Dispose() { connection.Close(); } public static void Run() { try { SqlParameter Param1 = _command.Parameters.Add("@Param1", SqlDbType.Int, 300101); Param1.Direction = ParameterDirection.Input; SqlParameter Param2 = _command.Parameters.Add("@Param2", SqlDbType.Int, 100); portal_SiteInfoID.Direction = ParameterDirection.Input; SqlParameter Param3 = _command.Parameters.Add("@Param3", SqlDbType.Int, 200); portal_RoleInfoID.Direction = ParameterDirection.Input; _command.ExecuteScalar(); } catch (Exception e) { Console.WriteLine(e); } } static void Main(string[] args) { try { Prepare(); Thread t1 = new Thread(Program.Run); t1.Start(); t1.Join(); Dispose(); } catch (Exception ex) { Console.WriteLine(ex.Message + "\t" + ex.StackTrace); } } } Thanks in advance, George
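
    An observation worth adding here, hedged rather than definitive: the Parameters.Add(name, SqlDbType, int) overload treats its third argument as the parameter size, not its value, so @Param1 never actually receives a value. A minimal sketch of supplying the values explicitly, assuming Pr_Foo really does take three int parameters:

      // sketch only: set Value separately (or use AddWithValue)
      _command.Parameters.Clear();
      _command.Parameters.Add("@Param1", SqlDbType.Int).Value = 300101;
      _command.Parameters.Add("@Param2", SqlDbType.Int).Value = 100;
      _command.Parameters.Add("@Param3", SqlDbType.Int).Value = 200;
      _command.ExecuteScalar();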

    Read the article

  • Java: the little console game won't repeat?

    - by Jony Kale
    Okay, what I have so far is: you enter the game and write "spin" to the console. The program then enters the while loop. Inside the while loop, if the entered int is -1, it should return to the start (set the console input back to "" and let the user select which game he would like to play). Problem: instead of going back and letting me select "spin" again, the program exits. Why is that happening, and how can I fix it? private static Scanner console = new Scanner(System.in); private static Spin spin = new Spin(); private static String inputS = ""; private static int inputI = 0; private static String[] gamesArray = new String[] {"spin", "tof"}; private static boolean spinWheel = false; private static boolean tof = false; public static void main (String[] args) { if (inputS.equals("")) { System.out.println("Welcome to the system!"); System.out.print("Please select a game: "); inputS = console.nextLine(); } while (inputS.equals("spin")) { System.out.println("Welcome to the spin game! Please write 1 to spin. and -1 to exit back"); inputI = console.nextInt(); switch (inputI) { case 1: break; case -1: inputI = 0; inputS = ""; break; } } }
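
    A hedged sketch of one way to make the menu repeat, an illustration rather than the original author's code: wrap the game selection in an outer loop, and consume the newline that Scanner.nextInt() leaves behind so the next nextLine() call does not return an empty string. The "quit" sentinel below is made up:

      while (true) {
          System.out.print("Please select a game: ");
          inputS = console.nextLine();
          if (inputS.equals("quit")) {
              break;                      // leave the program
          }
          while (inputS.equals("spin")) {
              System.out.println("Welcome to the spin game! Write 1 to spin and -1 to exit back");
              inputI = console.nextInt();
              console.nextLine();         // consume the newline nextInt() leaves behind
              if (inputI == -1) {
                  inputS = "";            // drops out of the inner loop, back to the prompt
              }
          }
      }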

    Read the article

  • Install i486 .package onto x64 CentOS

    - by medoix
    I am trying to install a ".package" file with Autopackage onto my x64 CentOS server and I receive the statement below. -sh-3.2$ bash armagetronad-dedicated-0.2.8.3.1.i486-generic-linux-gnu.package Sorry, Autopackage only supports x86 32-bit systems, or 64-bit systems with compatibility libraries installed. Please install the compatibility libraries and rerun install. However, I cannot find any documentation on which 32-bit libraries are required, or even where to start... Any ideas or suggestions would be greatly appreciated.
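
    A hedged guess at the missing pieces, with package names assumed rather than taken from any Autopackage documentation: on an x86_64 CentOS 5/6 box the basic 32-bit runtime usually comes from the i386/i686 variants of glibc and libstdc++, for example:

      # sketch only: exact names and arch suffix (.i386 vs .i686) depend on the CentOS release
      sudo yum install glibc.i686 libstdc++.i686 zlib.i686
      # then re-run the installer
      bash armagetronad-dedicated-0.2.8.3.1.i486-generic-linux-gnu.package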

    Read the article

  • Problems when I try to see databases in SQLite

    - by Sabau Andreea
    I created a database and two tables in code: static final String dbName="graficeCirculatie"; static final String ruteTable="Rute"; static final String colRuteId="RutaID"; static final String colRuta="Ruta"; static final String statiaTable="Statia"; static final String colStatiaID="StatiaID"; static final String colIdRuta="IdRuta"; static final String colStatia="Statia"; public DatabaseHelper(Context context) { super(context, dbName, null,33); } public void onCreate(SQLiteDatabase db) { // TODO Auto-generated method stub db.execSQL("CREATE TABLE " + statiaTable + " (" + colStatiaID + " INTEGER PRIMARY KEY , " + colIdRuta + " INTEGER, " + colStatia + " TEXT)"); db.execSQL("CREATE TABLE " + ruteTable + "(" + colRuteId + " INTEGER PRIMARY KEY AUTOINCREMENT, " + colRuta + " TEXT);"); InsertDepts(db); } void InsertDepts(SQLiteDatabase db) { ContentValues cv = new ContentValues(); cv.put(colRuteId, 1); cv.put(colRuta, "Expres8"); db.insert(ruteTable, colRuteId, cv); cv.put(colRuteId, 2); cv.put(colRuta, "Expres2"); db.insert(ruteTable, colRuteId, cv); cv.put(colRuteId, 3); cv.put(colRuta, "Expres3"); db.insert(ruteTable, colRuteId, cv); } Now I want to see the tables' contents from the command line. I tried it this way: C:\Program Files\Android\android-sdk\tools sqlite3 SQLite version 3.7.4 Enter ".help" for instructions Enter SQL statements terminated with a ";" sqlite sqlite3 graficeCirculatie ... select * from ruteTable; And I get an error: Error: near "squlite3": syntax error. Can someone help me?
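
    For reference, a hedged example of opening the database with the sqlite3 shell: sqlite3 <file> is typed at the OS prompt, not inside an already running sqlite> prompt, and the table is named by the value of ruteTable, i.e. Rute, not by the Java field name. The device path and package name below are assumptions:

      # pull the database off the emulator first (the package name is hypothetical)
      adb pull /data/data/com.example.grafice/databases/graficeCirculatie
      sqlite3 graficeCirculatie
      sqlite> .tables
      sqlite> SELECT * FROM Rute;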

    Read the article

  • Using java.util.regex in Android apps - are there issues with this?

    - by johnrock
    In an Android app I have a utility class that I use to parse strings for 2 regEx's. I compile the 2 patterns in a static initializer so they only get compiled once, then activities can use the parsing methods statically. This works fine except that the first time the class is accessed and loaded, and the static initializer compiles the pattern, the UI hangs for close to a MINUTE while it compiles the pattern! After the first time, it flies on all subsequent calls to parseString(). My regEx that I am using is rather large - 847 characters, but in a normal java webapp this is lightning fast. I am testing this so far only in the emulator with a 1.5 AVD. Could this just be an emulator issue or is there some other reason that this pattern is taking so long to compile? private static final String exp1 = "(insertratherlong---847character--regexhere)"; private static Pattern regex1 = null; private static final String newLineAndTagsExp = "[<>\\s]"; private static Pattern regexNewLineAndTags = null; static { regex1 = Pattern.compile(exp1, Pattern.CASE_INSENSITIVE); regexNewLineAndTags = Pattern.compile(newLineAndTagsExp); } public static String parseString(CharSequence inputStr) { String replacementStr = "replaceMentText"; String resultString = "none"; try { Matcher regexMatcher = regex1.matcher(inputStr); try { resultString = regexMatcher.replaceAll(replacementStr); } catch (IllegalArgumentException ex) { } catch (IndexOutOfBoundsException ex) { } } catch (PatternSyntaxException ex) { } return resultString; }

    Read the article

  • Java: creating objects of arrays with different names at runtime and accessing/updating them

    - by scriptingalias
    Hello, I'm trying to create a class that can instantiate arrays at runtime by giving each array a "name" created by the createtempobjectname() method. I'm having trouble making this program run. I would also like to see how I could access specific objects that were created during runtime and accessing those arrays by either changing value or accessing them. This is my mess so far, which compiles but gets a runtime exception. import java.lang.reflect.Array; public class arrays { private static String temp; public static int name = 0; public static Object o; public static Class c; public static void main(String... args) { assignobjectname(); //getclassname();//this is supposed to get the name of the object and somehow //allow the arrays to become updated using more code? } public static void getclassname() { String s = c.getName(); System.out.println(s); } public static void assignobjectname()//this creates the object by the name returned { //createtempobjectname() try { String object = createtempobjectname(); c = Class.forName(object); o = Array.newInstance(c, 20); } catch (ClassNotFoundException exception) { exception.printStackTrace(); } } public static String createtempobjectname() { name++; temp = Integer.toString(name); return temp; } }
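
    For comparison, a hedged sketch that is not the poster's intended design: giving each runtime-created array a generated name via a Map avoids feeding Class.forName() a string such as "1", which is not a class name and is the likely source of the ClassNotFoundException. Class and method names here are made up:

      import java.util.HashMap;
      import java.util.Map;

      public class NamedArrays {
          private static final Map<String, int[]> arrays = new HashMap<String, int[]>();
          private static int counter = 0;

          // create an int[20] under a freshly generated name and return that name
          public static String createArray() {
              String name = Integer.toString(++counter);
              arrays.put(name, new int[20]);
              return name;
          }

          public static void main(String... args) {
              String name = createArray();
              arrays.get(name)[0] = 42;               // update an element by name
              System.out.println(name + "[0] = " + arrays.get(name)[0]);
          }
      }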

    Read the article

  • Setting up JDK 7 for IntelliJ on the Mac

    - by Fergal
    I installed the JDK by downloading the dmg from the Oracle website here: http://www.oracle.com/technetwork/java/javase/downloads/jdk7u9-downloads-1859576.html After installation I tried to set up the JDK in IntelliJ, but when I set the location to the JDK in the Project Structure-SDKs screen, only a few libraries were loaded and many (including all libraries from Content/Classes/) were missing. How can I add all of the necessary libraries? The install location for the JDK is /Library/Java/JavaVirtualMachines/jdk1.7.0_07.jdk/Contents/Home I've tried looking in /System/Library/Frameworks/JavaVM.framework/Versions/ to no avail.

    Read the article

  • How can I install VLC on RHEL 6.3?

    - by holddame
    I'm having a problem installing VLC on Red Hat 6.3. When I try to use yum install vlc, all goes well until it shows me this at the end: Error: Package: vlc-2.0.3-6.el6.x86_64 (linuxtech-release) Requires: libminizip.so.1()(64bit) Error: Package: liblrdf-0.5.0-2.el6.x86_64 (linuxtech-release) Requires: ladspa Error: Package: libffado-2.1.0-0.8.20120325.svn2088.el6.x86_64 (linuxtech-release) Requires: libconfig++.so.8()(64bit) I also can't use yum update. I'm running on a 32-bit processor and I don't know what's wrong. OK, I've installed live555 and tried again; nothing really changed. Here is my yum whatprovides *BasicUsageEnviroment output: live555-devel-0-0.34.2012.01.25.el6.x86_64 : Development files for live555.com streaming : libraries Repo : linuxtech-release Matched from: Filename : /usr/include/BasicUsageEnvironment live555-devel-0-0.34.2012.01.25.el6.i686 : Development files for live555.com streaming : libraries Repo : linuxtech-release Matched from: Filename : /usr/include/BasicUsageEnvironment live555-devel-0-0.27.2010.04.09.el6.rf.x86_64 : Development files for live555.com streaming : libraries Repo : rpmforge Matched from: Filename : /usr/include/BasicUsageEnvironment live555-devel-0-0.27.2012.02.04.el6.rf.x86_64 : Development files for live555.com streaming : libraries Repo : rpmforge Matched from: Filename : /usr/include/BasicUsageEnvironment

    Read the article

  • Class initialization and synchronized class method

    - by nybon
    Hi there, In my application, there is a class like below: public class Client { public synchronized static void print() { System.out.println("hello"); } static { doSomething(); // which will take some time to complete } } This class will be used in a multi-threaded environment; many threads may call the Client.print() method simultaneously. I wonder if there is any chance that thread-1 triggers the class initialization, and before the class initialization completes, thread-2 enters the print method and prints out the "hello" string? I see this behavior in a production system (64 bit JVM + Windows 2008R2), however, I cannot reproduce this behavior with a simple program in any environment. In the Java language spec, section 12.4.1 (http://java.sun.com/docs/books/jls/second_edition/html/execution.doc.html), it says: A class or interface type T will be initialized immediately before the first occurrence of any one of the following: T is a class and an instance of T is created. T is a class and a static method declared by T is invoked. A static field declared by T is assigned. A static field declared by T is used and the reference to the field is not a compile-time constant (§15.28). References to compile-time constants must be resolved at compile time to a copy of the compile-time constant value, so uses of such a field never cause initialization. According to this paragraph, the class initialization will take place before the invocation of the static method; however, it is not clear whether the class initialization needs to be completed before the invocation of the static method. My intuition is that the JVM should mandate the completion of class initialization before entering its static method, and some of my experiments support this guess. However, I did see the opposite behavior in another environment. Can someone shed some light on this? Any help is appreciated, thanks.
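
    A hedged test sketch, not taken from the post, of the usual way to probe this: one thread triggers class initialization while a second calls the static method; per JLS 12.4.2 the second thread should block until the static initializer finishes, so "hello" should never print before "init ends". Class and method names are made up:

      public class InitRace {
          static class Client {
              public static synchronized void print() {
                  System.out.println("hello at " + System.currentTimeMillis());
              }
              static {
                  System.out.println("init starts at " + System.currentTimeMillis());
                  try { Thread.sleep(5000); } catch (InterruptedException ignored) { }
                  System.out.println("init ends at " + System.currentTimeMillis());
              }
          }

          public static void main(String[] args) {
              Runnable call = new Runnable() {
                  public void run() { Client.print(); }
              };
              new Thread(call).start();   // first caller triggers initialization
              new Thread(call).start();   // second caller should block until it completes
          }
      }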

    Read the article

  • why when I delete a parent on a one to many relationship on grails the beforeInsert event is called

    - by nico
    Hello, I have a one-to-many relationship, and when I try to delete a parent that has more than one child, the beforeInsert event gets called on the first child. I have some code in this event that I mean to run before inserting a child, not when I'm deleting the parent! Any ideas on what might be wrong? The entities: class MenuItem { static constraints = { name(blank:false,maxSize:200) category() subCategory(nullable:true, validator:{ val, obj -> if(val == null){ return true }else{ return obj.category.subCategories.contains(val)? true : ['invalid.category.no.subcategory'] } }) price(nullable:true) servedAtSantaMonica() servedAtWestHollywood() highLight() servedAllDay() dateCreated(display:false) lastUpdated(display:false) } static mapping = { extras lazy:false } static belongsTo = [category:MenuCategory,subCategory:MenuSubCategory] static hasMany = [extras:MenuItemExtra] static searchable = { extras component: true } String name BigDecimal price Boolean highLight = false Boolean servedAtSantaMonica = false Boolean servedAtWestHollywood = false Boolean servedAllDay = false Date dateCreated Date lastUpdated int displayPosition void moveUpDisplayPos(){ def oldDisplayPos = MenuItem.get(id).displayPosition if(oldDisplayPos == 0){ return }else{ def previousItem = MenuItem.findByCategoryAndDisplayPosition(category,oldDisplayPos - 1) previousItem.displayPosition += 1 this.displayPosition = oldDisplayPos - 1 this.save(flush:true) previousItem.save(flush:true) } } void moveDownDisplayPos(){ def oldDisplayPos = MenuItem.get(id).displayPosition if(oldDisplayPos == MenuItem.countByCategory(category) - 1){ return }else{ def nextItem = MenuItem.findByCategoryAndDisplayPosition(category,oldDisplayPos + 1) nextItem.displayPosition -= 1 this.displayPosition = oldDisplayPos + 1 this.save(flush:true) nextItem.save(flush:true) } } String toString(){ name } def beforeInsert = { displayPosition = MenuItem.countByCategory(category) } def afterDelete = { def otherItems = MenuItem.findAllByCategoryAndDisplayPositionGreaterThan(category,displayPosition) otherItems.each{ it.displayPosition -= 1 it.save() } } } class MenuItemExtra { static constraints = { extraOption(blank:false, maxSize:200) extraOptionPrice(nullable:true) } static searchable = true static belongsTo = [menuItem:MenuItem] BigDecimal extraOptionPrice String extraOption int displayPosition void moveUpDisplayPos(){ def oldDisplayPos = MenuItemExtra.get(id).displayPosition if(oldDisplayPos == 0){ return }else{ def previousExtra = MenuItemExtra.findByMenuItemAndDisplayPosition(menuItem,oldDisplayPos - 1) previousExtra.displayPosition += 1 this.displayPosition = oldDisplayPos - 1 this.save(flush:true) previousExtra.save(flush:true) } } void moveDownDisplayPos(){ def oldDisplayPos = MenuItemExtra.get(id).displayPosition if(oldDisplayPos == MenuItemExtra.countByMenuItem(menuItem) - 1){ return }else{ def nextExtra = MenuItemExtra.findByMenuItemAndDisplayPosition(menuItem,oldDisplayPos + 1) nextExtra.displayPosition -= 1 this.displayPosition = oldDisplayPos + 1 this.save(flush:true) nextExtra.save(flush:true) } } String toString(){ extraOption } def beforeInsert = { if(menuItem){ displayPosition = MenuItemExtra.countByMenuItem(menuItem) } } def afterDelete = { def otherExtras = MenuItemExtra.findAllByMenuItemAndDisplayPositionGreaterThan(menuItem,displayPosition) otherExtras.each{ it.displayPosition -= 1 it.save() } } }

    Read the article

  • Unresolved external symbol error in c++

    - by Crystal
    I am trying to do a simple hw problem involving namespace, static data members and functions. I am getting an unresolved external symbol error Error 1 error LNK2001: unresolved external symbol "private: static double JWong::SavingsAccount::annualInterestRate" (?annualInterestRate@SavingsAccount@JWong@@0NA) SavingsAccount.obj SavingsAccount And I don't see why I am getting this error. Maybe I don't know something about static variables compared to regular data members that is causing this error. Here is my code: SavingsAccount.h file #ifndef JWONG_SAVINGSACCOUNT_H #define JWONG_SAVINGSACCOUNT_H namespace JWong { class SavingsAccount { public: // default constructor SavingsAccount(); // constructor SavingsAccount(double savingsBalance); double getSavingsBalance(); void setSavingsBalance(double savingsBalance); double calculateMonthlyInterest(); // static functions static void modifyInterestRate(double newInterestRate); static double getAnnualInterestRest(); private: double savingsBalance; // static members static double annualInterestRate; }; } #endif SavingsAccount.cpp file #include <iostream> #include "SavingsAccount.h" // default constructor, set savingsBalance to 0 JWong::SavingsAccount::SavingsAccount() : savingsBalance(0) {} // constructor JWong::SavingsAccount::SavingsAccount(double savingsBalance) : savingsBalance(savingsBalance) {} double JWong::SavingsAccount::getSavingsBalance() { return savingsBalance; } void JWong::SavingsAccount::setSavingsBalance(double savingsBalance) { this->savingsBalance = savingsBalance; } // returns monthly interest and sets savingsBalance to new amount double JWong::SavingsAccount::calculateMonthlyInterest() { double monthlyInterest = savingsBalance * SavingsAccount::annualInterestRate / 12; setSavingsBalance(savingsBalance + monthlyInterest); return monthlyInterest; } void JWong::SavingsAccount::modifyInterestRate(double newInterestRate) { SavingsAccount::annualInterestRate = newInterestRate; } double JWong::SavingsAccount::getAnnualInterestRest() { return SavingsAccount::annualInterestRate; }
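
    For what it's worth, this particular linker error usually just means the static data member was declared inside the class but never defined; the in-class declaration is not a definition. A minimal sketch of the missing line, placed at namespace scope in SavingsAccount.cpp (the initial value 0.0 is an assumption):

      // SavingsAccount.cpp, outside any function: definition of the static member
      double JWong::SavingsAccount::annualInterestRate = 0.0;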

    Read the article

  • Using nginx to rewrite urls inside outgoing responses

    - by Kev
    We have a customer with a site running on Apache. Recently the site has been seeing increased load and as a stop gap we want to shift all the static content on the site to a cookieless domains, e.g. http://static.thedomain.com. The application is not well understood. So to give the developers time to amend the code to point their links to the static content server (http://static.thedomain.com) I thought about proxying the site through nginx and rewriting the outgoing responses such that links to /images/... are rewritten as http://static.thedomain.com/images/.... So for example, in the response from Apache to nginx there is a blob of Headers + HTML. In the HTML returned from Apache we have <img> tags that look like: <img src="/images/someimage.png" /> I want to transform this to: <img src="http://static.thedomain.com/images/someimage.png" /> So that the browser upon receiving the HTML page then requests the images directly from the static content server. Is this possible with nginx (or HAProxy)? I have had a cursory glance through the docs but nothing jumped out at me except rewriting inbound urls.
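
    One hedged possibility, assuming nginx was built with ngx_http_sub_module (it is not in every default build): the sub_filter directive can rewrite strings in proxied response bodies. A sketch along these lines, with the upstream name invented:

      location / {
          proxy_pass http://apache_backend;        # upstream name is hypothetical
          proxy_set_header Accept-Encoding "";     # keep the upstream response uncompressed
          sub_filter 'src="/images/' 'src="http://static.thedomain.com/images/';
          sub_filter_once off;                     # replace every occurrence, not just the first
      }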

    Read the article

  • C# adding list into list

    - by gencay
    I have a DocumentList.c as implemented below. When I try to add a list into an instance of the DocumentList object, it is added, but all the other items become the same. class DocumentList { public static List wordList; public static string type; public static string path; public static double cos; public static double dice; public static double jaccard; //public static string title; public DocumentList(List wordListt, string typee, string pathh, double sm11, double sm22, double sm33) { type = typee; wordList = wordListt; path = pathh; cos = sm11; dice = sm22; jaccard = sm33; } } In the main C# code fragment: public partial class Window1 : System.Windows.Window { static private List documentList = new List(); ... In a method I use it as below: DocumentList dt = new DocumentList(para1, para2, para3, para4, para5, para6); documentList.Add(dt); Now, when I add the first list it is OK, and there seems to be 1 item in documentList, but after the second one I get a list with 2 items that are both the same. I mean, I cannot keep the previous list item.
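
    A hedged illustration of the likely cause, as an aside rather than a fix of the poster's code: every field in DocumentList is static, so all instances share a single copy and the last constructor call overwrites the earlier values. Making them instance fields keeps each entry separate; the element type string below is an assumption, since the post shows a non-generic List:

      using System.Collections.Generic;

      class DocumentList
      {
          public List<string> wordList;
          public string type;
          public string path;
          public double cos;
          public double dice;
          public double jaccard;

          public DocumentList(List<string> wordList, string type, string path,
                              double cos, double dice, double jaccard)
          {
              // instance fields: each DocumentList keeps its own values
              this.wordList = wordList;
              this.type = type;
              this.path = path;
              this.cos = cos;
              this.dice = dice;
              this.jaccard = jaccard;
          }
      }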

    Read the article

  • Java Generics error when implementing Hibernate message interpolator

    - by Jayaprakash
    Framework: Spring, Hibernate. O/S: Windows. I am trying to implement Hibernate's custom message interpolator following the directions of this link. When implementing the class below, it gives the error "Cannot make a static reference to the non-static type Locale". public class ClientLocaleThreadLocal<Locale> { private static ThreadLocal tLocal = new ThreadLocal(); public static void set(Locale locale) { tLocal.set(locale); } public static Locale get() { return tLocal.get(); } public static void remove() { tLocal.remove(); } } As I do not know generics well enough, I am not sure how the type parameter is being used by the TimerFilter class below, or what the purpose of the declaration in the above class is. public class TimerFilter implements Filter { public void destroy() { } public void doFilter(ServletRequest req, ServletResponse res, FilterChain filterChain) throws IOException, ServletException { try { ClientLocaleThreadLocal.set(req.getLocale()); filterChain.doFilter(req, res); }finally { ClientLocaleThreadLocal.remove(); } } public void init(FilterConfig arg0) throws ServletException { } } Would doing the following be okay? Change the static methods/fields in ClientLocaleThreadLocal to non-static. In TimerFilter, set the locale by instantiating a new object, as below: new ClientLocaleThreadLocal().set(req.getLocale()) Thanks for your help in advance.
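
    A hedged rewrite sketch, not the linked article's code: the compile error comes from the class-level type parameter <Locale>, which shadows java.util.Locale and turns it into a generic placeholder that static members cannot reference. Dropping the type parameter and typing the ThreadLocal directly keeps the static API that TimerFilter already calls:

      import java.util.Locale;

      public class ClientLocaleThreadLocal {
          private static final ThreadLocal<Locale> tLocal = new ThreadLocal<Locale>();

          public static void set(Locale locale) { tLocal.set(locale); }
          public static Locale get()            { return tLocal.get(); }
          public static void remove()           { tLocal.remove(); }
      }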

    Read the article

  • RHEL 6 x64: running 32 bit applications

    - by user54614
    We develop an application which currently works in 32-bit mode only. It worked fine in RHEL 5 but failed to work in RHEL 6. The reason is that RHEL 6 is installed with 64-bit libraries only by default. Moreover, we didn't find a way to choose installation of a 32-bit runtime environment during or after system installation. Of course, we did find a way to install three RPM packages with the 32-bit libraries required for our application to work. But that seems unpleasant for our customers (we have to install three RPMs from the DVD on the command line). So the question is: Is there a convenient way for RHEL 6 customers to install 32-bit libraries on their RHEL 6 system? Say, a user-friendly menu item or a special command that installs the same set of 32-bit system libraries that existed in RHEL 5? What are the best practices in such cases?
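
    A hedged sketch of two common approaches, with package names assumed rather than taken from Red Hat documentation: install the multilib variants directly, or declare them as dependencies of the application's own RPM so that yum pulls them in and customers never type the commands by hand:

      # sketch only: typical 32-bit runtime packages on x86_64 RHEL 6
      yum install glibc.i686 libstdc++.i686 libgcc.i686 zlib.i686

      # alternatively, declare them in the application's .spec file so yum
      # resolves them automatically at install time, e.g.:
      #   Requires: glibc(x86-32) libstdc++(x86-32)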

    Read the article

  • The Stub Proto: Not Just For Stub Objects Anymore

    - by user9154181
    One of the great pleasures of programming is to invent something for a narrow purpose, and then to realize that it is a general solution to a broader problem. In hindsight, these things seem perfectly natural and obvious. The stub proto area used to build the core Solaris consolidation has turned out to be one of those things. As discussed in an earlier article, the stub proto area was invented as part of the effort to use stub objects to build the core ON consolidation. Its purpose was merely as a place to hold stub objects. However, we keep finding other uses for it. It turns out that the stub proto should be more properly thought of as an auxiliary place to put things that we would like to put into the proto to help us build the product, but which we do not wish to package or deliver to the end user. Stub objects are one example, but private lint libraries, header files, archives, and relocatable objects, are all examples of things that might profitably go into the stub proto. Without a stub proto, these items were handled in a variety of ad hoc ways: If one part of the workspace needed private header files, libraries, or other such items, it might modify its Makefile to reach up and over to the place in the workspace where those things live and use them from there. There are several problems with this: Each component invents its own approach, meaning that programmers maintaining the system have to invest extra effort to understand what things mean. In the past, this has created makefile ghettos in which only the person who wrote the makefiles feels confident to modify them, while everyone else ignores them. This causes many difficulties and benefits no one. These interdependencies are not obvious to the make, utility, and can lead to races. They are not obvious to the human reader, who may therefore not realize that they exist, and break them. Our policy in ON is not to deliver files into the proto unless those files are intended to be packaged and delivered to the end user. However, sometimes non-shipping files were copied into the proto anyway, causing a different set of problems: It requires a long list of exceptions to silence our normal unused proto item error checking. In the past, we have accidentally shipped files that we did not intend to deliver to the end user. Mixing cruft with valuable items makes it hard to discern which is which. The stub proto area offers a convenient and robust solution. Files needed to build the workspace that are not delivered to the end user can instead be installed into the stub proto. No special exceptions or custom make rules are needed, and the intent is always clear. We are already accessing some private lint libraries and compilation symlinks in this manner. Ultimately, I'd like to see all of the files in the proto that have a packaging exception delivered to the stub proto instead, and for the elimination of all existing special case makefile rules. This would include shared objects, header files, and lint libraries. I don't expect this to happen overnight — it will be a long term case by case project, but the overall trend is clear. The Stub Proto, -z assert_deflib, And The End Of Accidental System Object Linking We recently used the stub proto to solve an annoying build issue that goes back to the earliest days of Solaris: How to ensure that we're linking to the OS bits we're building instead of to those from the running system. 
The Solaris product is made up of objects and files from a number of different consolidations, each of which is built separately from the others from an independent code base called a gate. The core Solaris OS consolidation is ON, which stands for "Operating System and Networking". You will frequently also see ON called the OSnet. There are consolidations for X11 graphics, the desktop environment, open source utilities, compilers and development tools, and many others. The collection of consolidations that make up Solaris is known as the "Wad Of Stuff", usually referred to simply as the WOS. None of these consolidations is self contained. Even the core ON consolidation has some dependencies on libraries that come from other consolidations. The build server used to build the OSnet must be running a relatively recent version of Solaris, which means that its objects will be very similar to the new ones being built. However, it is necessarily true that the build system objects will always be a little behind, and that incompatible differences may exist. The objects built by the OSnet link to other objects. Some of these dependencies come from the OSnet, while others come from other consolidations. The objects from other consolidations are provided by the standard library directories on the build system (/lib, /usr/lib). The objects from the OSnet itself are supposed to come from the proto areas in the workspace, and not from the build server. In order to achieve this, we make use of the -L command line option to the link-editor. The link-editor finds dependencies by looking in the directories specified by the caller using the -L command line option. If the desired dependency is not found in one of these locations, ld will then fall back to looking at the default locations (/lib, /usr/lib). In order to use OSnet objects from the workspace instead of the system, while still accessing non-OSnet objects from the system, our Makefiles set -L link-editor options that point at the workspace proto areas. In general, this works well and dependencies are found in the right places. However, there have always been failures: Building objects in the wrong order might mean that an OSnet dependency hasn't been built before an object that needs it. If so, the dependency will not be seen in the proto, and the link-editor will silently fall back to the one on the build server. Errors in the makefiles can wipe out the -L options that our top level makefiles establish to cause ld to look at the workspace proto first. In this case, all objects will be found on the build server. These failures were rarely if ever caught. As I mentioned earlier, the objects on the build server are generally quite close to the objects built in the workspace. If they offer compatible linking interfaces, then the objects that link to them will behave properly, and no issue will ever be seen. However, if they do not offer compatible linking interfaces, the failure modes can be puzzling and hard to pin down. Either way, there won't be a compile-time warning or error. The advent of the stub proto eliminated the first type of failure. With stub objects, there is no dependency ordering, and the necessary stub object dependency will always be in place for any OSnet object that needs it. However, makefile errors do still occur, and so, the second form of error was still possible. 
While working on the stub object project, we realized that the stub proto was also the key to solving the second form of failure caused by makefile errors: Due to the way we set the -L options to point at our workspace proto areas, any valid object from the OSnet should be found via a path specified by -L, and not from the default locations (/lib, /usr/lib). Any OSnet object found via the default locations means that we've linked to the build server, which is an error we'd like to catch. Non-OSnet objects don't exist in the proto areas, and so are found via the default paths. However, if we were to create a symlink in the stub proto pointing at each non-OSnet dependency that we require, then the non-OSnet objects would also be found via the paths specified by -L, and not from the link-editor defaults. Given the above, we should not find any dependency objects from the link-editor defaults. Any dependency found via the link-editor defaults means that we have a Makefile error, and that we are linking to the build server inappropriately. All we need to make use of this fact is a linker option to produce a warning when it happens. Although warnings are nice, we in the OSnet have a zero tolerance policy for build noise. The -z fatal-warnings option that was recently introduced with -z guidance can be used to turn the warnings into fatal build errors, forcing the programmer to fix them. This was too easy to resist. I integrated 7021198 ld option to warn when link accesses a library via default path PSARC/2011/068 ld -z assert-deflib option into snv_161 (February 2011), shortly after the stub proto was introduced into ON. This putback introduced the -z assert-deflib option to the link-editor: -z assert-deflib=[libname] Enables warning messages for libraries specified with the -l command line option that are found by examining the default search paths provided by the link-editor. If a libname value is provided, the default library warning feature is enabled, and the specified library is added to a list of libraries for which no warnings will be issued. Multiple -z assert-deflib options can be specified in order to specify multiple libraries for which warnings should not be issued. The libname value should be the name of the library file, as found by the link-editor, without any path components. For example, the following enables default library warnings, and excludes the standard C library. ld ... -z assert-deflib=libc.so ... -z assert-deflib is a specialized option, primarily of interest in build environments where multiple objects with the same name exist and tight control over the library used is required. If is not intended for general use. Note that the definition of -z assert-deflib allows for exceptions to be specified as arguments to the option. In general, the idea of using a symlink from the stub proto is superior because it does not clutter up the link command with a long list of objects. When building the OSnet, we usually use the plain from of -z deflib, and make symlinks for the non-OSnet dependencies. The exception to this are dependencies supplied by the compiler itself, which are usually found at whatever arbitrary location the compiler happens to be installed at. To handle these special cases, the command line version works better. Following the integration of the link-editor change, I made use of -z assert-deflib in OSnet builds with 7021896 Prevent OSnet from accidentally linking to build system which integrated into snv_162 (March 2011). 
    Turning on -z assert-deflib exposed between 10 and 20 existing errors in our Makefiles, which were all fixed in the same putback. The errors we found in our Makefiles underscore how difficult they can be to prevent without an automatic system in place to catch them. Conclusions The stub proto is proving to be a generally useful construct for ON builds that goes beyond serving as a place to hold stub objects. Although invented to hold stub objects, it has already allowed us to simplify a number of previously difficult situations in our makefiles and builds. I expect that we'll find uses for it beyond those described here as we go forward.

    Read the article

  • Continuous integration never results in build errors

    - by Jon
    Hi, I'm working with a variety of Java EE websites which use internal libraries we've developed. For each website, we only upgrade to new versions of our internal libraries as needed, and before committing we make sure that the site compiles fine. What this means is that when TeamCity does a build of one of our sites, the site compiles fine, but later when the site is updated to the latest version of internal libraries, there might be a compile error. Is there a good way to handle this? We're not using Maven yet; would using Maven mean that our websites could automatically use the latest version of internal libraries? Thanks. Clarification: What we sometimes run into is this: Project A depends on a library, and is currently using library version 1.0 Project B also depends on that library. I make changes to the library so that it is now version 1.5. Project B now uses 1.5. Project A and project B have both been built just fine by the CI server (TeamCity) Working on project A again, I update to 1.5 and discover that 1.5 has breaking changes in it. Is there a way for the CI server to discover these kinds of breaking changes?

    Read the article

  • Semantic Versioning and splitting apart a library, providing a bundled build

    - by Derick Bailey
    I've got a nice, fairly popular JavaScript library that is following Semantic Versioning. The current library has a few dependency libraries, which are available either as separate downloads or as part of a single bundled download. I see a need to head down this path further. I want to extract additional, smaller libraries out of the one larger library. Each of these extracted libraries would be available as separate files, or inside of the one bundled build, again. If I go down this path of extracting the libraries, and providing a bundled version of the final code, does this require a full version change in semantic versioning? Would I have to bump from 1.x to 2.x? My first thought it no: I will not change any public API, so I don't have to change the major version number. But then I wonder... well, I am restructuring a lot of things, even though the final API for the bundled version would be the same. Is there a clear answer from semver on something like this? Do I need to bump first, second or third dot? Or something else?

    Read the article

  • Code base migration - old versioning system to modern

    - by JohnP
    Our current code base is contained in a versioning system that is old and outdated (Visual Sourcesafe 5.0, mid 1990's), and contains a mix of packages that are no longer used, ones that are being used but no longer updated, and newer code. It is also a mix of 4 languages, and includes libraries for some of our systems (Such as Dialogic, Sun Tzu {clipper}) implementations. This breaks down into the following categories: Legacy code - No longer used (Systems that have been retired or replaced, etc) Legacy code - In current use (No intentions for upgrades or minor bug fixes, only major fixes if needed) Current code - In current use, and will be used for future versions/development Support libraries - For both legacy and current code (Some of the legacy libraries are no longer available as well) We would like to migrate this to a newer versioning system as we will be adding more developers, and expanding the reach to include remote programmers. When migrating, how do you structure it? Do you just perform a dump of all the data and then import it into the new system, or do you segregate according to type before you bring it into the new system? Do you set up a separate area for libraries, or keep them with the relevant packages? Do you separate by language, system, both? A general outline and methodology is fine, it doesn't need to be broken down to individual program level.

    Read the article

< Previous Page | 109 110 111 112 113 114 115 116 117 118 119 120  | Next Page >