Search Results

Search found 7323 results on 293 pages for 'cache manifest'.


  • Deploying Oracle ADF Essentials Applications to Glassfish

    - by Shay Shmeltzer
    With the new Oracle ADF Essentials offering you can now deploy applications that leverage Oracle ADF on the open source GlassFish 3.1 server. Deployment is documented in the official JDeveloper and ADF documentation (here), but below is a summary of the steps, and a video of the steps, you'll need to take to get a basic Oracle ADF Essentials application to work on GlassFish. Note - to make starting/stopping GlassFish easier for my demo I used my GlassFish extension that you can get here.
    First we'll install some ADF Runtime libraries on GlassFish:
    - Download and install GlassFish (note - if you also have an Oracle DB on the same machine, you'll want to switch GlassFish's HTTP port to something else instead of 8080).
    - Download the Oracle ADF Essentials packaging - this will get you an adf_essentials.zip file.
    - Copy adf_essentials.zip to the lib directory of your GlassFish domain - on a default Windows install this would be: C:\glassfish3\glassfish\domains\domain1\lib
    - Go to the above lib directory and run unzip -j adf_essentials.zip. This will extract the ADF libraries to the directory. Now you can start the GlassFish server.
    Now let's configure GlassFish to handle applications of the ADF type:
    - Invoke the admin console of GlassFish (http://localhost:4848) and log into your admin account.
    - Go to Configurations->Server-config->JVM Settings and choose the JVM Options tab.
    - Add the following entries: -XX:MaxPermSize=512m (note this entry should already exist, so just make sure it has a big enough value) and -Doracle.mds.cache=simple
    While we are in the admin console, we can also define JDBC connections that will be used by our application:
    - Go into Resources->JDBC->JDBC Connection Pools and click to create a new one.
    - Give it a name, choose javax.sql.XADataSource as the resource type, and choose Oracle as the Database Driver vendor. Click Next.
    - Scroll down to the Additional Properties section and start filling in the information for your database. The values for an Oracle XE will be: user=hr, databaseName=XE, Password=hr, ServerName=localhost, DriverType=thin, PortNumber=1521. Click Finish.
    - Click Ping to check your connection works.
    - Now define a new JDBC Resource that will use the pool you just defined. In my example I called the resource jdbc/HRDS. You will need this name to match the name in your Application Module connection configuration. Now you can restart the GlassFish server for the changes to take effect.
    Next, get an ADF application going (you can use the regular Fusion Application template for this):
    - Go into the project properties of your viewController project; under the Deployment section click to edit the deployment profile that is defined there. Go to Platform and choose GlassFish 3.1 from the drop-down list. Click OK to go back to your project.
    - Go to Application -> Application Properties -> Deployment. Go to Platform and choose GlassFish 3.1 from the drop-down list. Click OK to go back to your project. This step will make sure that JDeveloper automatically adds the necessary ADF libraries to the EAR file that is being generated for deployment on GlassFish.
    - Go to Application -> Deploy and deploy either to an EAR file or directly to a GlassFish server connection that you created. Things should just work, but if they don't, look at server.log in the log directory and check what error is in there.
    Here is a video demo of the various steps. Note - right now the deployment of an ADF application takes about 2 minutes on my machine; we are hoping to be able to improve this timing in the future. People who are more familiar with GlassFish might want to explore using exploded directory deployment and see if they can get it to work.
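    For reference, here is a minimal command-line sketch of the library setup described above. It is only a sketch: it assumes a Linux-style shell, the default domain1, and that asadmin is on your PATH, and the /path/to placeholder must be replaced with your own GlassFish location.

        # Copy the ADF Essentials libraries into the domain's lib directory
        cp adf_essentials.zip /path/to/glassfish3/glassfish/domains/domain1/lib
        cd /path/to/glassfish3/glassfish/domains/domain1/lib
        unzip -j adf_essentials.zip

        # Start the domain, then add the JVM options (colons must be escaped for asadmin)
        asadmin start-domain domain1
        asadmin create-jvm-options '-Doracle.mds.cache=simple'
        asadmin create-jvm-options '-XX\:MaxPermSize=512m'   # only if not already present
        asadmin restart-domain domain1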

    Read the article

  • Notes on implementing Visual Studio 2010 Navigate To

    - by cyberycon
    One of the many neat functions added to Visual Studio in VS 2010 was the Navigate To feature. You can find it by clicking Edit, Navigate To, or by using the keyboard shortcut Ctrl, (yes, that's Control plus the comma key). This pops up the Navigate To dialog. As you type, Navigate To starts searching through a number of different search providers for your term. The entries in the list change as you type, with most providers doing some kind of fuzzy or at least substring matching. If you have C#, C++ or Visual Basic projects in your solution, all symbols defined in those projects are searched. There's also a file search provider, which displays all matching filenames from projects in the current solution as well. And, if you have a Visual Studio package of your own, you can implement a provider too. Micro Focus (where I work) provide the Visual COBOL language inside Visual Studio (http://visualstudiogallery.msdn.microsoft.com/ef9bc810-c133-4581-9429-b01420a9ea40), and we wanted to provide this functionality too. This post provides some notes on the things I discovered, mainly through trial and error, but also with some kind help from devs inside Microsoft. The expectation of Navigate To is that it searches across the whole solution, not just the current project. So in our case, we wanted to search for all COBOL symbols inside all of our Visual COBOL projects in the solution.
    First of all, here's the Microsoft documentation on Navigate To: http://msdn.microsoft.com/en-us/library/ee844862.aspx. It's the reference information on the Microsoft.VisualStudio.Language.NavigateTo.Interfaces namespace, and it lists all the interfaces you will need to implement to create your own Navigate To provider. Navigate To uses Visual Studio's latest mechanism for integrating external functionality and services, the Managed Extensibility Framework (MEF). MEF components don't require any registration with COM or any other registry entries to be found by Visual Studio. Visual Studio looks in several well-known locations for manifest files (extension.vsixmanifest). It then uses reflection to scan for MEF attributes on classes in the assembly to determine which functionality the assembly provides. MEF itself is actually part of the .NET framework, and you can learn more about it here: http://mef.codeplex.com/. To get started with Visual Studio and MEF you could do worse than look at some of the editor examples on the VSX page http://archive.msdn.microsoft.com/vsx . I've also written a small application to help with switching between development and production MEF assemblies, which you can find on CodeProject: http://www.codeproject.com/KB/miscctrl/MEF_Switch.aspx.
    The Navigate To interfaces
    Back to Navigate To, and summarizing the MSDN reference documentation, you need to implement the following interfaces:
    - INavigateToItemProviderFactory: This is Visual Studio's entry point to your Navigate To implementation, and you must decorate your implementation with the following MEF export attribute: [Export(typeof(INavigateToItemProviderFactory))]
    - INavigateToItemProvider: Your INavigateToItemProviderFactory needs to return your implementation of INavigateToItemProvider. This class implements StartSearch() and StopSearch(). StartSearch() is the guts of your provider, and we'll come back to it in a minute. This object also needs to implement IDisposable.
    - INavigateToItemDisplayFactory: Your INavigateToItemProvider hands back NavigateToItems to the NavigateTo framework.
    But to give you good control over what appears in the Navigate To dialog box, these items will be handed back to your INavigateToItemDisplayFactory, which must create objects implementing INavigateToItemDisplay.
    - INavigateToItemDisplay: Each of these objects represents one result in the Navigate To dialog box. As well as providing the description and name of the item, this object also has a NavigateTo() method that should be capable of displaying the item in an editor when invoked.
    Carrying out the search
    The lifecycle of your INavigateToItemProvider is the same as that of the Navigate To dialog. This dialog is modal, which makes your implementation a little easier because you know that the user can't be changing things in editors and the IDE while this dialog is up. But the Navigate To dialog DOES NOT run on the main UI thread of the IDE - so you need to be aware of that if you want to interact with editors or other parts of the IDE UI. When the user invokes the Navigate To dialog, your INavigateToItemProviderFactory gets sent a TryCreateNavigateToItemProvider() message. Instantiate your INavigateToItemProvider and hand this back. Your INavigateToItemProvider will then get called with StartSearch(), and passed an INavigateToCallback. StartSearch() is an asynchronous request - you must return from this method as soon as possible, and conduct your search on a separate thread. For each match to the search term, instantiate a NavigateToItem object and send it to INavigateToCallback.AddItem(). But as the user types in the Search Terms field, Navigate To will invoke your StartSearch() method repeatedly with the changing search term. When you receive the next StartSearch() message, you have to abandon your current search and start a new one. You can't rely on receiving a StopSearch() message every time. Finally, when the Navigate To dialog box is closed by the user, you will get a Dispose() message - that's your cue to abandon any uncompleted searches and dispose of any resources you might be using as part of your search. While you conduct your search, invoke INavigateToCallback.ReportProgress() occasionally to provide feedback about how close you are to completing the search. There does not appear to be any particular requirement for how often you invoke ReportProgress(), and you report your progress as the ratio of two integers. In my implementation I report progress in terms of the number of symbols I've searched over the total number of symbols in my dictionary, and send a progress report every 16 symbols.
    Displaying the Results
    The Navigate To framework invokes INavigateToItemDisplayFactory.CreateItemDisplay() once for each result you passed to the INavigateToCallback. CreateItemDisplay() is passed the NavigateToItem you handed to the callback, and must return an INavigateToItemDisplay object. NavigateToItem is a sealed class which has a few properties, including the name of the symbol. It also has a Tag property, of type object. This enables you to stash away all the information you will need to create your INavigateToItemDisplay, which must implement a NavigateTo() method to display a symbol in an editor when the user double-clicks an entry in the Navigate To dialog box. Since the tag is of type object, it is up to you, the implementor, to decide what kind of object you store in here, and how it enables the retrieval of other information which is not included in the NavigateToItem properties.
    Some of the INavigateToItemDisplay properties are self-explanatory, but a couple of them are less obvious:
    - AdditionalInformation: The string you return here is displayed inside brackets on the same line as the Name property. In English locales, Visual Studio includes the preposition "of". If you look at the first line in the Navigate To screenshot at the top of this article, Book_WebRole.Default is the additional information for textBookAuthor, and is the namespace-qualified type name the symbol appears in. For procedural COBOL code we display the Program Id as the additional information.
    - DescriptionItems: You can use this property to return any textual description you want about the item currently selected. You return a collection of DescriptionItem objects, each of which has a category and a description collection of DescriptionRun objects. A DescriptionRun enables you to specify some text, and optional formatting, so you have some control over the appearance of the displayed text. The DescriptionItems property is displayed at the bottom of the Navigate To dialog box, with the Categories on the left and the Descriptions on the right. The Visual COBOL implementation uses it to display more information about the location of an item, making it easier for the user to disambiguate duplicate names (something there can be a lot of in large COBOL applications).
    Summary
    I hope this article is useful for anyone implementing Navigate To. It is a fantastic navigation feature that Microsoft have added to Visual Studio, but at the moment there still don't seem to be any examples on how to implement it, and the reference information on MSDN is a little brief for anyone attempting an implementation.
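    To make the shape of a provider concrete, here is a minimal C# skeleton of the factory and provider described above. Treat it as a sketch only: the class names and the background-thread approach are illustrative, the symbol list is a placeholder, and the construction of each NavigateToItem (together with its INavigateToItemDisplayFactory) is left as a comment rather than spelled out.

        using System;
        using System.ComponentModel.Composition;
        using System.Threading;
        using Microsoft.VisualStudio.Language.NavigateTo.Interfaces;

        [Export(typeof(INavigateToItemProviderFactory))]
        public class SampleNavigateToItemProviderFactory : INavigateToItemProviderFactory
        {
            // Visual Studio calls this when the Navigate To dialog is opened.
            public bool TryCreateNavigateToItemProvider(IServiceProvider serviceProvider,
                                                        out INavigateToItemProvider provider)
            {
                provider = new SampleNavigateToItemProvider();
                return true;
            }
        }

        public class SampleNavigateToItemProvider : INavigateToItemProvider
        {
            private volatile bool stopRequested;

            // Must return quickly: run the real work on a background thread.
            public void StartSearch(INavigateToCallback callback, string searchValue)
            {
                stopRequested = false;
                ThreadPool.QueueUserWorkItem(_ =>
                {
                    string[] symbols = GetAllSymbolNames();   // placeholder for your symbol store
                    for (int i = 0; i < symbols.Length && !stopRequested; i++)
                    {
                        if (symbols[i].IndexOf(searchValue, StringComparison.OrdinalIgnoreCase) >= 0)
                        {
                            // Build a NavigateToItem for the match (name, kind, and a Tag holding
                            // whatever your INavigateToItemDisplayFactory needs) and hand it back:
                            // callback.AddItem(item);
                        }
                        callback.ReportProgress(i, symbols.Length);   // ratio of two integers
                    }
                });
            }

            // A new StartSearch() can arrive at any time; abandon the current search.
            public void StopSearch() { stopRequested = true; }

            // The dialog was closed: stop work and release any resources.
            public void Dispose() { stopRequested = true; }

            private static string[] GetAllSymbolNames() { return new string[0]; }
        }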

    Read the article

  • PHP-FPM stops responding and dies [migrated]

    - by user12361
    I'm running Drupal 6 with Nginx 1.5.1 and PHP-FPM (PHP 5.3.26) on a 1GB single core VPS with 3GB of swap space on SSD storage. I just switched from shared hosting to this unmanaged VPS because my site was getting too heavy, so I'm still learning the ropes. I have moderately high traffic, I don't really monitor it closely but Google Adsense usually record close to 30K page views/day. I usually have 50 to 80 authenticated users logged in and a few hundred more anonymous users hitting the Boost static HTML cache at any given moment. The problem I'm having is that PHP-FPM frequently stops responding, resulting in Nginx 502 or 504 errors. I swear I have read every page on the internet about this issue, which seems fairly common, and I've tried endless combinations of configurations, and I can't find a good solution. After restarting Nginx and PHP-FPM, the site runs really fast for a while, and then without warning it simply stops responding. I get a white screen while the browser waits on the server, and after about 30 seconds to a minute it throws an Nginx 502 or 504 error. Sometimes it runs well for 2 minutes, sometimes 5 minutes, sometimes 5 hours, but it always ends up hanging. When I find the server in this state, there is still plenty of free memory (500MB or more) and no major CPU usage, the control and worker PHP-FPM processes are still present, and the server is still pingable and usable via SSH. A reload of PHP-FPM via the init script revives it again. The hangups don't seem to correspond to the amount of traffic, because I observed this behavior consistently when I was testing this configuration on a development VPS with no traffic at all. I've been constantly tweaking the settings, but I can't definitively eliminate the problem. I set Nginx workers to just 1. In the PHP-FPM config I have tried all three of the process managers. "Dynamic" is definitely the least reliable, consistently hanging up after only a few minutes. "Static" also has been unreliable and unpredictable. The least buggy has been "ondemand", but even that is failing me, sometimes after as much as 12 to 24 hours. But I can't leave the server unattended because PHP-FPM dies and never comes back on its own. I tried adjusting the pm.max_children value from as low as 3 to as high as 50, doesn't make a lot of difference, but I currently have it at 10. Same thing for the spare servers values. I also have set pm.max_requests anywhere from 30 to unlimited, and it doesn't seem to make a difference. According to the logs, the PHP-FPM processes are not exiting with SIGSEGV or SIGBUS, but rather with SIGTERM. I get a lot of lines like: WARNING: [pool www] child 3739, script '/var/www/drupal6/index.php' (request: "GET /index.php") execution timed out (38.739494 sec), terminating and: WARNING: [pool www] child 3738 exited on signal 15 (SIGTERM) after 50.004380 seconds from start I actually found several articles that recommend doing a graceful reload of PHP-FPM via cron every few minutes or hours to circumvent this issue. So that's what I did, "/etc/init.d/php-fpm reload" every 5 minutes. So far, it's keeping the lights on. But it feels like a dreadful hack. Is PHP-FPM really that unreliable? Is there anything else I can do? Thanks a lot!
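    For what it's worth, the periodic-reload workaround described above boils down to a single crontab entry. This is only a sketch of that hack, assuming the reload is run from root's crontab and that /etc/init.d/php-fpm is the correct init script path on your system:

        # Reload PHP-FPM every 5 minutes (a workaround, not a fix)
        */5 * * * * /etc/init.d/php-fpm reload >/dev/null 2>&1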

    Read the article

  • Patterns for a tree of persistent data with multiple storage options?

    - by Robin Winslow
    I have a real-world problem which I'll try to abstract into an illustrative example. So imagine I have data objects in a tree, where parent objects can access children, and children can access parents: // Interfaces interface IParent<TChild> { List<TChild> Children { get; set; } } interface IChild<TParent> { TParent Parent { get; set; } } // Classes class Top : IParent<Middle> {} class Middle : IParent<Bottom>, IChild<Top> {} class Bottom : IChild<Middle> {} // Usage var top = new Top(); var middles = top.Children; // List<Middle> foreach (var middle in middles) { var bottoms = middle.Children; // List<Bottom> foreach (var bottom in bottoms) { var parent = bottom.Parent; // Access the parent var grandparent = parent.Parent; // Access the grandparent } } All three data objects have properties that are persisted in two data stores (e.g. a database and a web service), and they need to reflect and synchronise with the stores. Some objects only request from the web service, some only write to it.
    Data Mapper
    My favourite pattern for data access is Data Mapper, because it completely separates the data objects themselves from the communication with the data store: class TopMapper { public Top FetchById(int id) { var top = new Top(DataStore.TopDataById(id)); top.Children = MiddleMapper.FetchForTop(top); return top; } } class MiddleMapper { public Middle FetchById(int id) { var middle = new Middle(DataStore.MiddleDataById(id)); middle.Parent = TopMapper.FetchForMiddle(middle); middle.Children = BottomMapper.FetchForMiddle(middle); return middle; } } This way I can have one mapper per data store, and build the object from the mapper I want, and then save it back using the mapper I want. There is a circular reference here, but I guess that's not a problem because most languages can just store memory references to the objects, so there won't actually be infinite data. The problem with this is that every time I want to construct a new Top, Middle or Bottom, it needs to build the entire object tree within that object's Parent or Children property, with all the data store requests and memory usage that that entails. And in real life my tree is much bigger than the one represented here, so that's a problem.
    Requests in the object
    In this approach the objects request their Parents and Children themselves: class Middle { private List<Bottom> _children = null; // cache public List<Bottom> Children { get { _children = _children ?? BottomMapper.FetchForMiddle(this); return _children; } set { BottomMapper.UpdateForMiddle(this, value); _children = value; } } } I think this is an example of the repository pattern. Is that correct? This solution seems neat - the data only gets requested from the data store when you need it, and thereafter it's stored in the object if you want to request it again, avoiding a further request. However, I have two different data sources. There's a database, but there's also a web service, and I need to be able to create an object from the web service and save it back to the database and then request it again from the database and update the web service. This also makes me uneasy because the data objects themselves are no longer ignorant of the data source. We've introduced a new dependency, not to mention a circular dependency, making it harder to test. And the objects now mask their communication with the database.
    Other solutions
    Are there any other solutions which could take care of the multiple stores problem but also mean that I don't need to build / request all the data every time?

    Read the article

  • MySQL Server 5.6 default my.cnf and my.ini

    - by user12626240
    We've introduced a default my.cnf / my.ini file for MySQL Server that you can now see in the 5.6.8 release candidate: # For advice on how to change settings please see # http://dev.mysql.com/doc/refman/5.6/en/server-configuration-defaults.html [mysqld] # Remove leading # and set to the amount of RAM for the most important data # cache in MySQL. Start at 70% of total RAM for dedicated server, else 10%. # innodb_buffer_pool_size = 128M   # Remove leading # to turn on a very important data integrity option: logging # changes to the binary log between backups. # log_bin   # These are commonly set, remove the # and set as required. # basedir = ..... # datadir = ..... # port = ..... # socket = ..... # server_id = .....   # Remove leading # to set options mainly useful for reporting servers. # The server defaults are faster for transactions and fast SELECTs. # Adjust sizes as needed, experiment to find the optimal values. # join_buffer_size = 128M # sort_buffer_size = 2M # read_rnd_buffer_size = 2M   sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES    There is also a template file called my-default.cnf or my-default.ini that has these lines near the start: # *** DO NOT EDIT THIS FILE. It's a template which will be copied to the # *** default location during install, and will be replaced if you # *** upgrade to a newer version of MySQL.   On Linux systems, the mysql_install_db command will copy the template file to the final location, where the server will read and use the file, removing the extra three lines. On Windows, the installer will create extra settings based on the answers you gave during installation. Neither will overwrite an existing my.cnf or my.ini file. The only initially active setting here is to change the value of  sql_mode from the server default of NO_ENGINE_SUBSTITUTION to NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES. This strict mode changes warnings for some non-standard behaviour into errors. This can cause applications which rely on the non-standard things, like dates that aren't valid, to lose data. If we had just changed the server default, the new setting would affect all servers that lack an explicit sql_mode setting, including those where strict mode is harmful. So we did it in the default file instead because that will only affect new server installations. You should expect that in our next version after 5.6, the server default will include STRICT_TRANS_TABLES. Our Windows installer and some of our connectors already use STRICT_TRANS_TABLES by default. Strict has been our preferred setting for many years and it is good to see some development platforms are using it. If you need the old behaviour, just remove the STRICT_TRANS_TABLES setting. If you do this, please also ask your application provider to make it unnecessary. They can do that by setting the session sql_mode setting in their own connections, so the rest of the applications using the server don't have to have an undesirable default. We've kept this file as small as possible because we found that our old files were too big and confused people. We've also now removed the old my-huge and related example files. One key part of this is the link to the documentation, where we will provide an introduction to some key settings. We'd like to hear your feedback on settings that will benefit most users or are most important to call out for existing users. Please do that by commenting here or if you prefer by adding comments to this bug report.
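    To illustrate that last suggestion: an application that really needs the old behaviour can opt out per connection instead of weakening the server-wide default. This is a sketch using standard MySQL syntax; the value shown is simply the old default mode:

        -- Run right after connecting, leaving the server default strict
        SET SESSION sql_mode = 'NO_ENGINE_SUBSTITUTION';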

    Read the article

  • Entity System with C++ templates

    - by tommaisey
    I've been getting interested in the Entity/Component style of game programming, and I've come up with a design in C++ which I'd like a critique of. I decided to go with a fairly pure Entity system, where entities are simply an ID number. Components are stored in a series of vectors - one for each Component type. However, I didn't want to have to add boilerplate code for every new Component type I added to the game. Nor did I want to use macros to do this, which frankly scare me. So I've come up with a system based on templates and type hinting. But there are some potential issues I'd like to check before I spend ages writing this (I'm a slow coder!). All Components derive from a Component base class. This base class has a protected constructor that takes a string parameter. When you write a new derived Component class, you must initialise the base with the name of your new class in a string. When you first instantiate a new DerivedComponent, it adds the string to a static hashmap inside Component, mapped to a unique integer id. When you subsequently instantiate more Components of the same type, no action is taken. The result (I think) should be a static hashmap with the name of each class derived from Component that you instantiate at least once, mapped to a unique id, which can be obtained with the static method Component::getTypeId("DerivedComponent"). Phew. The next important part is TypedComponentList<typename PropertyType>. This is basically just a wrapper to an std::vector<typename PropertyType> with some useful methods. It also contains a hashmap of entity ID numbers to slots in the array so we can find Components by their entity owner. Crucially, TypedComponentList<> is derived from the non-template class ComponentList. This allows me to maintain a list of pointers to ComponentList in my main ComponentManager, which actually point to TypedComponentLists with different template parameters (sneaky). The ComponentManager has template functions such as: template <typename ComponentType> void addProperty (ComponentType& component, int componentTypeId, int entityId) and: template <typename ComponentType> TypedComponentList<ComponentType>* getComponentList (int componentTypeId) which deal with casting from ComponentList to the correct TypedComponentList for you. So to get a list of a particular type of Component you call: TypedComponentList<MyComponent>* list = componentManager.getComponentList<MyComponent> (Component::getTypeId("MyComponent")); Which I'll admit looks pretty ugly.
    Bad points of the design:
    - If a user of the code writes a new Component class but supplies the wrong string to the base constructor, the whole system will fail.
    - Each time a new Component is instantiated, we must check a hashed string to see if that component type has been instantiated before.
    - It will probably generate a lot of assembly because of the extensive use of templates. I don't know how well the compiler will be able to minimise this.
    - You could consider the whole system a bit complex - perhaps premature optimisation? But I want to use this code again and again, so I want it to be performant.
    Good points of the design:
    - Components are stored in typed vectors but they can also be found by using their entity owner id as a hash. This means we can iterate them fast, and minimise cache misses, but also skip straight to the component we need if necessary.
    - We can freely add Components of different types to the system without having to add and manage new Component vectors by hand.
    What do you think?
Do the good points outweigh the bad?

    Read the article

  • cannot install firmware-b43-installer

    - by unknown
    output i get when installing the installArchives() failed: Preconfiguring packages ... Preconfiguring packages ... Preconfiguring packages ... Selecting previously deselected package menu. (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 216125 files and directories currently installed.) Unpacking menu (from .../menu_2.1.44ubuntu1_i386.deb) ... Selecting previously deselected package wifi-radar. Unpacking wifi-radar (from .../wifi-radar_2.0.s05-1.2_all.deb) ... Processing triggers for man-db ... Processing triggers for install-info ... Processing triggers for doc-base ... Processing 1 added doc-base file(s)... Registering documents with scrollkeeper... Processing triggers for bamfdaemon ... Rebuilding /usr/share/applications/bamf.index... Processing triggers for desktop-file-utils ... Processing triggers for python-gmenu ... Rebuilding /usr/share/applications/desktop.en_US.UTF8.cache... Processing triggers for python-support ... Setting up firmware-b43-installer (4.150.10.5-5) ... --2012-10-26 08:51:30-- http://mirror2.openwrt.org/sources/broadcom-wl-4.150.10.5.tar.bz2 Resolving mirror2.openwrt.org... 46.4.11.11 Connecting to mirror2.openwrt.org|46.4.11.11|:80... failed: Connection refused. dpkg: error processing firmware-b43-installer (--configure): subprocess installed post-installation script returned error exit status 4 No apport report written because MaxReports is reached already Setting up menu (2.1.44ubuntu1) ... Processing triggers for menu ... Setting up wifi-radar (2.0.s05-1.2) ... Processing triggers for menu ... Errors were encountered while processing: firmware-b43-installer Setting up firmware-b43-installer (4.150.10.5-5) ... --2012-10-26 08:51:33-- http://mirror2.openwrt.org/sources/broadcom-wl-4.150.10.5.tar.bz2 Resolving mirror2.openwrt.org... 46.4.11.11 Connecting to mirror2.openwrt.org|46.4.11.11|:80... failed: Connection refused. dpkg: error processing firmware-b43-installer (--configure): subprocess installed post-installation script returned error exit status 4 same thing occurs when i try to install any of the wireless application. All other software installs and the same error when trying to install firmware. I tried to go the link(http://mirror2.openwrt.org/sources/broadcom-wl-4.150.10.5.tar.bz2) and download the package but found no make file found in his package. please help me.

    Read the article

  • SSH / SFTP connection issue using Tamir.SharpSsh

    - by jinsungy
    This is my code to connect and send a file to a remote SFTP server: public static void SendDocument(string fileName, string host, string remoteFile, string user, string password) { Scp scp = new Scp(); scp.OnConnecting += new FileTansferEvent(scp_OnConnecting); scp.OnStart += new FileTansferEvent(scp_OnProgress); scp.OnEnd += new FileTansferEvent(scp_OnEnd); scp.OnProgress += new FileTansferEvent(scp_OnProgress); try { scp.To(fileName, host, remoteFile, user, password); } catch (Exception e) { throw e; } } I can successfully connect, send and receive files using CoreFTP. Thus, the issue is not with the server. When I run the above code, the process seems to stop at the scp.To method. It just hangs indefinitely. Anyone know what my problem might be? Maybe it has something to do with adding the key to an SSH cache? If so, how would I go about this? EDIT: I inspected the packets using Wireshark and discovered that my computer is not executing the Diffie-Hellman Key Exchange Init. This must be the issue. EDIT: I ended up using the following code. Note that StrictHostKeyChecking was turned off to make things easier. JSch jsch = new JSch(); jsch.setKnownHosts(host); Session session = jsch.getSession(user, host, 22); session.setPassword(password); System.Collections.Hashtable hashConfig = new System.Collections.Hashtable(); hashConfig.Add("StrictHostKeyChecking", "no"); session.setConfig(hashConfig); try { session.connect(); Channel channel = session.openChannel("sftp"); channel.connect(); ChannelSftp c = (ChannelSftp)channel; c.put(fileName, remoteFile); c.exit(); } catch (Exception e) { throw e; } Thanks.

    Read the article

  • Flash AS3: (VideoEvent.COMPLETE, completePlay) - listener is triggered before video is completed

    - by Tevi
    Hello, I have a flash video using the standard FLV Playback component that comes with Flash. I'm using ActionScript 3 to modify the appearance and set up an event listener. I've set it up to go to a new URL using ExternalInterface when the video completes playing. The URL is set in a variable using SWFObject. In only a few instances (3 people out of 50 - tested using Amazon Turk), people reported being taken directly to the new URL before the video even started playing. It's difficult to repeat the issue, but it did happen to me once. It doesn't have anything to do with the cache, since it has been reported by people going to the URL for the first time. Here's the URL to the video: http://www.partstown.com/is-bin/INTERSHOP.enfinity/WFS/Reedy-PartsTown-Site/en_US/-/USD/ViewStaticPage-UnFramed?page=tourthetown Here's the code: import flash.external.*; import fl.video.*; var myVideo:FLVPlayback = new FLVPlayback(); var theUrl:String = this.loaderInfo.parameters.urlName; var theScript:String = this.loaderInfo.parameters.scriptName; myVideo.source = this.loaderInfo.parameters.videoPath;//"partstown.flv"; myVideo.skin = this.loaderInfo.parameters.skinPath;//"SkinUnderPlayStopSeekMuteVol.swf" myVideo.skinBackgroundColor = 0xAEBEFB; myVideo.skinBackgroundAlpha = 0.5; myVideo.width = 939; myVideo.height = 660; myVideo.addEventListener(VideoEvent.COMPLETE, completePlay); function completePlay(e:VideoEvent):void { myVideo.alpha=0.2; ExternalInterface.call(theScript); } addChild(myVideo); Why would the listener be triggered before the event completes? How can I fix it? Thanks!

    Read the article

  • BinaryWrite exception "OutputStream is not available when a custom TextWriter is used" in MVC 2 ASP.

    - by Grant
    Hi, I have a view rendering a stream using the response BinaryWrite method. This all worked fine under ASP.NET 4 using the Beta 2 but throws this exception in the RC release: "HttpException" , "OutputStream is not available when a custom TextWriter is used." <%@ Page Title="" Language="C#" Inherits="System.Web.Mvc.ViewPage" %> <%@ Import Namespace="System.IO" %> <script runat="server"> protected void Page_Load(object sender, EventArgs e) { if (ViewData["Error"] == null) { Response.Buffer = true; Response.Clear(); Response.ContentType = ViewData["DocType"] as string; Response.AddHeader("content-disposition", ViewData["Disposition"] as string); Response.CacheControl = "No-cache"; MemoryStream stream = ViewData["DocAsStream"] as MemoryStream; Response.BinaryWrite(stream.ToArray()); Response.Flush(); Response.Close(); } } </script> </script> The view is generated from a client side redirect (jquery replace location call in the previous page using Url.Action helper to render the link of course). This is all in an iframe. Anyone have an idea why this occurs?

    Read the article

  • Problems with repositories on CentOS 3.9

    - by rodnower
    Hello, I have CentOS 3.9 for i386. When I try to install something with yum, e.g.: yum install firefox or yum install firefox* or yum list firefox and so on, I get: +++++++++++++++++++ yum info firefox Gathering header information file(s) from server(s) Server: CentOS-3 - Addons Server: CentOS-3 - Base Server: CentOS-3 - Extras Server: CentOS-3 - Updates Server: Jason's Utter Ramblings Repo Finding updated packages Downloading needed headers Looking in Available Packages: Looking in Installed Packages: +++++++++++++++++++ Some time ago I had CentOS 5, and I had a similar problem (apart from firefox, no other packages would install) and I spent a lot of time finding different repositories and so on. Now I have CentOS 3, and there is nothing I can install with yum. This is the yum.conf content: +++++++++++++++++++ [main] cachedir=/var/cache/yum debuglevel=2 logfile=/var/log/yum.log pkgpolicy=newest distroverpkg=redhat-release installonlypkgs=kernel kernel-smp kernel-hugemem kernel-enterprise kernel-debug kernel-unsupported kernel-smp-unsupported kernel-hugemem-unsupported tolerant=1 exactarch=1 [utterramblings] name=Jason's Utter Ramblings Repo baseurl=http://www.jasonlitka.com/media/EL4/i386/ [base] name=CentOS-$releasever - Base baseurl=http://mirror.centos.org/centos/$releasever/os/$basearch/ #released updates [update] name=CentOS-$releasever - Updates baseurl=http://mirror.centos.org/centos/$releasever/updates/$basearch/ #packages used/produced in the build but not released [addons] name=CentOS-$releasever - Addons baseurl=http://mirror.centos.org/centos/$releasever/addons/$basearch/ #additional packages that may be useful [extras] name=CentOS-$releasever - Extras baseurl=http://mirror.centos.org/centos/$releasever/extras/$basearch/ #[centosplus] #name=CentOS-$releasever - Plus #baseurl=http://mirror.centos.org/centos/$releasever/centosplus/$basearch/ #[testing] #name=CentOS-$releasever - Testing #baseurl=http://mirror.centos.org/centos/$releasever/testing/$basearch/ #[fasttrack] #name=CentOS-$releasever - Fasttrack #baseurl=http://mirror.centos.org/centos/$releasever/fasttrack/$basearch/ +++++++++++++++++++ The file is long, so I have edited it down a little. So my question is: is there one "normal" repository that has all the basic things like firefox, which I can add to this file so that everything will work fine? Thank you very much in advance.

    Read the article

  • Silverlight 4 Overriding the DataForm Autogenerate to insert Combo Boxes bound to Converters.

    - by kmacmahon
    I've been working towards a solution for some time and could use a little help. I know I've seen an example of this before, but tonight I cannot find anything close to what I need. I have a service that provides me all my DropDownLists, either from Cache or from the DomainService. They are presented as IEnumerable, and are requested from the a repository with GetLookup(LookupId). I have created a custom attribute that I have decorated my MetaDataClass that looks something like this: [Lookup(Lookup.Products)] public Guid ProductId I have created a custom Data Form that is set to AutoGenerateFields and I am intercepting the autogenerate fields. I am checking for my CustomAttribute and that works. Given this code in my CustomDataForm (standard comments removed for brevity), what is the next step to override the field generation and place a bound combobox in its place? public class CustomDataForm : DataForm { private Dictionary<string, DataField> fields = new Dictionary<string, DataField>(); public Dictionary<string, DataField> Fields { get { return this.fields; } } protected override void OnAutoGeneratingField(DataFormAutoGeneratingFieldEventArgs e) { PropertyInfo propertyInfo = this.CurrentItem.GetType().GetProperty(e.PropertyName); foreach (Attribute attribute in propertyInfo.GetCustomAttributes(true)) { LookupFieldAttribute lookupFieldAttribute = attribute as LookupFieldAttribute; if (lookupFieldAttribute != null) { // Create a combo box. // Bind it to my Lookup IEnumerable // Set the selected item to my Field's Value // Set the binding two way } } this.fields[e.PropertyName] = e.Field; base.OnAutoGeneratingField(e); } } Any cited working examples for SL4/VS2010 would be appreciated. Thanks

    Read the article

  • TF203015 The Item $/path/file has an incompatible pending change. While trying to unshelve.

    - by drachenstern
    I'm using Visual Studio 2010 Pro against Team Server 2010 and I had my project opened (apparently) as a solution from the repo, but I should've opened it as "web site". I found this out during compile, so I went to shelve my new changes and deleted the project from my local disk, then opened the project again from source (this time as web site) and now I can't unshelve my files. Is there any way to work around this? Did I blow something up? Do I need to do maintenance at the server? I found this question on SO #2332685 but I don't know what cache files he's talking about (I'm on XP :\ ) EDIT: Found this link after posting the question, sorry for the delay in researching, still didn't fix my problem Of course I can't find an error code for TF203015 anywhere, so no resolution either (hence my inclusion of the number in the title, yeah?) EDIT: I should probably mention that these files were never checked in in the first place. Does that matter? Can you shelve an unchecked item? Is that what I did wrong? EDIT: WHAP - FOUND IT!!! Use "Undo" on the items that don't exist because they show up in pending changes as checkins.

    Read the article

  • Greasemonkey @require jQuery not working "Component not available"

    - by Greg K
    I've seen the other question on here about loading jQuery in a Greasemonkey. Having tried that method, with this require statement inside my ==UserScript== tags: // @require http://ajax.googleapis.com/ajax/libs/jquery/1/jquery.min.js I still get the following error message in Firefox's error console: Error: Component is not available Source File: file:///Users/greg/Library/Application%20Support/ Firefox/Profiles/xo9xhovo.default/gm_scripts/myscript/jquerymin.js Line: 36 This stops my greasemonkey code from running. I've made sure I included the @require for jQuery and saved my js file before installing it, as required files are only loaded on installation. Code: // ==UserScript== // @name My Script // @namespace http://www.google.com // @description My test script // @include http://www.google.com // @require http://ajax.googleapis.com/ajax/libs/jquery/1/jquery.min.js // ==/UserScript== GM_log("Hello"); I have Greasemonkey 0.8.20091209.4 installed on Firefox 3.5.7 on my Macbook Pro, Leopard (10.5.8). I've cleared my cache (except cookies) and have disabled all other plugins except Flashblock 1.5.11.2, Web Developer 1.1.8 and Adblock Plus 1.1.3. My config.xml with my Greasemonkey script installed: <UserScriptConfig> <Script filename="myscript.user.js" name="My Script" namespace="http://www.google.com" description="My test script" enabled="true" basedir="myscript"> <Include>http://www.google.com</Include> <Require filename="jquerymin.js"/> </Script> I can see jquerymin.js sat in the gm_scripts/myscript/ directory. Additionally, is it common for this error to occur in the console when installing a Greasemonkey script? Error: not well-formed Source File: file:///Users/Greg/Documents/myscript.user.js Line: 1, Column: 1 Source Code: // ==UserScript==

    Read the article

  • Iphone page curl effect

    - by dragon
    I am using this code for a page curl effect. It works fine in the simulator and on the device, but setType:@"pageCurl" is not a documented Apple API, and this caused the app to be rejected by the iPhone Developer Program during the App Store review process: animation = [CATransition animation]; [animation setDelegate:self]; [animation setDuration:1.0f]; animation.startProgress = 0.5; animation.endProgress = 1; [animation setTimingFunction:UIViewAnimationCurveEaseInOut]; [animation setType:@"pageCurl"]; [animation setSubtype:@"fromRight"]; [animation setRemovedOnCompletion:NO]; [animation setFillMode: @"extended"]; [animation setRemovedOnCompletion: NO]; [[imageView layer] addAnimation:animation forKey:@"pageFlipAnimation"]; So I changed it and am using this instead: [UIView beginAnimations:nil context:NULL]; [UIView setAnimationDuration:1]; [UIView setAnimationDelegate:self]; [UIView setAnimationBeginsFromCurrentState:YES]; [UIView setAnimationCurve:UIViewAnimationCurveLinear]; [UIView setAnimationWillStartSelector:@selector(transitionWillStart:finished:context:)]; [UIView setAnimationDidStopSelector:@selector(transitionDidStop:finished:context:)]; // other animation properties [UIView setAnimationTransition:UIViewAnimationTransitionCurlUp forView:imageView cache:YES]; // set view properties [UIView commitAnimations]; In the code above I want to stop the page curl effect at midway, but I can't stop it midway the way the Maps application on the iPod does. Is there any fix for this? Or is there any Apple-documented method for a page curl effect on the iPod touch? I have been searching a lot but didn't find any answer. Can anyone help me? Thanks in advance.

    Read the article

  • "Message":"Invalid JSON primitive: RecordId."

    - by Radhi
    getting error in ajax call from jquery. here is my jquery function function AddAlbumToMyProfile(AlbumId, AlbumName, AlbumTypeName) { var obj = { AlbumId: AlbumId, AlbumName: AlbumName, AlbumTypeName: AlbumTypeName }; //following is ASP.NET AJAX serialize function to convert //object into jSON. var json = Sys.Serialization.JavaScriptSerializer.serialize(obj); $.ajax({ type: "POST", url: "Gallary.aspx/AddAlbumToMyProfile", data: json, contentType: "application/json; charset=utf-8", dataType: "json", async: true, cache: false, success: function(msg) { if (msg.d == '') { alert("Album Added to your profile"); } else { alert(msg.d); } }, error: function(XMLHttpRequest, textStatus, errorThrown) { } }); } and this is my webmethod [WebMethod] public static string DeleteRecord(Int64 RecordId, Int64 UserId, Int64 UserProfileId, string ItemType, string FileName) { try { string FilePath = HttpContext.Current.Server.MapPath(FileName); XDocument xmldoc = XDocument.Load(FilePath); XElement Xelm = xmldoc.Element("UserProfile"); XElement parentElement = Xelm.XPathSelectElement(ItemType + "/Fields"); (from BO in parentElement.Descendants("Record") where BO.Element("Id").Attribute("value").Value == RecordId.ToString() select BO).Remove(); XDocument xdoc = XDocument.Parse(Xelm.ToString(), LoadOptions.PreserveWhitespace); xdoc.Save(FilePath); UserInfoHandler obj = new UserInfoHandler(); return obj.GetHTML(UserId, UserProfileId, FileName, ItemType, RecordId, Xelm).ToString(); } catch (Exception ex) { HandleException.LogError(ex, "EditUserProfile.aspx", "DeleteRecord"); } return "success"; } can anybody please tell me whats wrong in my code?? i am getting error: {"Message":"Invalid JSON primitive: RecordId.","StackTrace":" at System.Web.Script.Serialization.JavaScriptObjectDeserializer.DeserializePrimitiveObject()\r\n at System.Web.Script.Serialization.JavaScriptObjectDeserializer.DeserializeInternal(Int32 depth)\r\n at System.Web.Script.Serialization.JavaScriptObjectDeserializer.BasicDeserialize(String input, Int32 depthLimit, JavaScriptSerializer serializer)\r\n at System.Web.Script.Serialization.JavaScriptSerializer.Deserialize(JavaScriptSerializer serializer, String input, Type type, Int32 depthLimit)\r\n at System.Web.Script.Serialization.JavaScriptSerializer.Deserialize[T](String input)\r\n at System.Web.Script.Services.RestHandler.GetRawParamsFromPostRequest(HttpContext context, JavaScriptSerializer serializer)\r\n at System.Web.Script.Services.RestHandler.GetRawParams(WebServiceMethodData methodData, HttpContext context)\r\n at System.Web.Script.Services.RestHandler.ExecuteWebServiceCall(HttpContext context, WebServiceMethodData methodData)","ExceptionType":"System.ArgumentException"}

    Read the article

  • JBoss5: Cannot deploy due to java.util.zip.ZipException: error in opening zip file

    - by Andreas
    I have a web client and a EJB project, which I created with Eclipse 3.4. When I want to deploy it on Jboss 5.0.1, I receive the error below. I searched a lot but I wasn't able to find a solution to this. 18:21:21,899 INFO [ServerImpl] Starting JBoss (Microcontainer)... 18:21:21,900 INFO [ServerImpl] Release ID: JBoss [Morpheus] 5.0.1.GA (build: SVNTag=JBoss_5_0_1_GA date=200902231221) 18:21:21,900 INFO [ServerImpl] Bootstrap URL: null 18:21:21,900 INFO [ServerImpl] Home Dir: /Applications/jboss-5.0.1.GA 18:21:21,900 INFO [ServerImpl] Home URL: file:/Applications/jboss-5.0.1.GA/ 18:21:21,901 INFO [ServerImpl] Library URL: file:/Applications/jboss-5.0.1.GA/lib/ 18:21:21,901 INFO [ServerImpl] Patch URL: null 18:21:21,901 INFO [ServerImpl] Common Base URL: file:/Applications/jboss-5.0.1.GA/common/ 18:21:21,902 INFO [ServerImpl] Common Library URL: file:/Applications/jboss-5.0.1.GA/common/lib/ 18:21:21,902 INFO [ServerImpl] Server Name: default 18:21:21,902 INFO [ServerImpl] Server Base Dir: /Applications/jboss-5.0.1.GA/server 18:21:21,902 INFO [ServerImpl] Server Base URL: file:/Applications/jboss-5.0.1.GA/server/ 18:21:21,902 INFO [ServerImpl] Server Config URL: file:/Applications/jboss-5.0.1.GA/server/default/conf/ 18:21:21,902 INFO [ServerImpl] Server Home Dir: /Applications/jboss-5.0.1.GA/server/default 18:21:21,902 INFO [ServerImpl] Server Home URL: file:/Applications/jboss-5.0.1.GA/server/default/ 18:21:21,903 INFO [ServerImpl] Server Data Dir: /Applications/jboss-5.0.1.GA/server/default/data 18:21:21,903 INFO [ServerImpl] Server Library URL: file:/Applications/jboss-5.0.1.GA/server/default/lib/ 18:21:21,903 INFO [ServerImpl] Server Log Dir: /Applications/jboss-5.0.1.GA/server/default/log 18:21:21,903 INFO [ServerImpl] Server Native Dir: /Applications/jboss-5.0.1.GA/server/default/tmp/native 18:21:21,903 INFO [ServerImpl] Server Temp Dir: /Applications/jboss-5.0.1.GA/server/default/tmp 18:21:21,903 INFO [ServerImpl] Server Temp Deploy Dir: /Applications/jboss-5.0.1.GA/server/default/tmp/deploy 18:21:22,669 INFO [ServerImpl] Starting Microcontainer, bootstrapURL=file:/Applications/jboss-5.0.1.GA/server/default/conf/bootstrap.xml 18:21:23,535 INFO [VFSCacheFactory] Initializing VFSCache [org.jboss.virtual.plugins.cache.CombinedVFSCache] 18:21:23,541 INFO [VFSCacheFactory] Using VFSCache [CombinedVFSCache[real-cache: null]] 18:21:23,942 INFO [CopyMechanism] VFS temp dir: /Applications/jboss-5.0.1.GA/server/default/tmp 18:21:23,943 INFO [ZipEntryContext] VFS force nested jars copy-mode is enabled. 18:21:26,263 INFO [ServerInfo] Java version: 1.5.0_16,Apple Inc. 18:21:26,264 INFO [ServerInfo] Java Runtime: Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_16-b06-284) 18:21:26,264 INFO [ServerInfo] Java VM: Java HotSpot(TM) Server VM 1.5.0_16-133,Apple Inc. 
18:21:26,264 INFO [ServerInfo] OS-System: Mac OS X 10.5.6,i386 18:21:26,336 INFO [JMXKernel] Legacy JMX core initialized 18:21:30,432 INFO [ProfileServiceImpl] Loading profile: default from: org.jboss.system.server.profileservice.repository.SerializableDeploymentRepository@e1d5d9(root=/Applications/jboss-5.0.1.GA/server, key=org.jboss.profileservice.spi.ProfileKey@143b82c3[domain=default,server=default,name=default]) 18:21:30,436 INFO [ProfileImpl] Using repository:org.jboss.system.server.profileservice.repository.SerializableDeploymentRepository@e1d5d9(root=/Applications/jboss-5.0.1.GA/server, key=org.jboss.profileservice.spi.ProfileKey@143b82c3[domain=default,server=default,name=default]) 18:21:30,436 INFO [ProfileServiceImpl] Loaded profile: ProfileImpl@ae002e{key=org.jboss.profileservice.spi.ProfileKey@143b82c3[domain=default,server=default,name=default]} 18:21:32,935 INFO [WebService] Using RMI server codebase: http://localhost:8083/ 18:21:42,572 INFO [NativeServerConfig] JBoss Web Services - Stack Native Core 18:21:42,573 INFO [NativeServerConfig] 3.0.5.GA 18:21:52,836 ERROR [AbstractKernelController] Error installing to ClassLoader: name=vfsfile:/Applications/jboss-5.0.1.GA/server/default/deploy/TwitterEAR.ear/ state=Describe mode=Manual requiredState=ClassLoader org.jboss.deployers.spi.DeploymentException: Error creating classloader for vfsfile:/Applications/jboss-5.0.1.GA/server/default/deploy/TwitterEAR.ear/ at org.jboss.deployers.spi.DeploymentException.rethrowAsDeploymentException(DeploymentException.java:49) at org.jboss.deployers.structure.spi.helpers.AbstractDeploymentContext.createClassLoader(AbstractDeploymentContext.java:576) at org.jboss.deployers.structure.spi.helpers.AbstractDeploymentUnit.createClassLoader(AbstractDeploymentUnit.java:159) at org.jboss.deployers.spi.deployer.helpers.AbstractClassLoaderDeployer.deploy(AbstractClassLoaderDeployer.java:53) at org.jboss.deployers.plugins.deployers.DeployerWrapper.deploy(DeployerWrapper.java:171) at org.jboss.deployers.plugins.deployers.DeployersImpl.doDeploy(DeployersImpl.java:1439) at org.jboss.deployers.plugins.deployers.DeployersImpl.doInstallParentFirst(DeployersImpl.java:1157) at org.jboss.deployers.plugins.deployers.DeployersImpl.install(DeployersImpl.java:1098) at org.jboss.dependency.plugins.AbstractControllerContext.install(AbstractControllerContext.java:348) at org.jboss.dependency.plugins.AbstractController.install(AbstractController.java:1598) at org.jboss.dependency.plugins.AbstractController.incrementState(AbstractController.java:934) at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:1062) at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:984) at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:822) at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:553) at org.jboss.deployers.plugins.deployers.DeployersImpl.process(DeployersImpl.java:781) at org.jboss.deployers.plugins.main.MainDeployerImpl.process(MainDeployerImpl.java:698) at org.jboss.system.server.profileservice.ProfileServiceBootstrap.loadProfile(ProfileServiceBootstrap.java:304) at org.jboss.system.server.profileservice.ProfileServiceBootstrap.start(ProfileServiceBootstrap.java:205) at org.jboss.bootstrap.AbstractServerImpl.start(AbstractServerImpl.java:405) at org.jboss.Main.boot(Main.java:209) at org.jboss.Main$1.run(Main.java:547) at java.lang.Thread.run(Thread.java:613) Caused by: java.lang.Error: Error 
visiting FileHandler@5567366[path=TwitterEAR.ear/TwitterPoCEJB.jar context=file:/Applications/jboss-5.0.1.GA/server/default/deploy/ real=file:/Applications/jboss-5.0.1.GA/server/default/deploy/TwitterEAR.ear/TwitterPoCEJB.jar/] at org.jboss.classloading.plugins.vfs.PackageVisitor.determineAllPackages(PackageVisitor.java:98) at org.jboss.deployers.vfs.plugins.classloader.VFSDeploymentClassLoaderPolicyModule.determineCapabilities(VFSDeploymentClassLoaderPolicyModule.java:108) at org.jboss.classloading.spi.dependency.Module.getCapabilities(Module.java:654) at org.jboss.classloading.spi.dependency.Module.determinePackageNames(Module.java:713) at org.jboss.classloading.spi.dependency.Module.getPackageNames(Module.java:698) at org.jboss.deployers.vfs.plugins.classloader.VFSDeploymentClassLoaderPolicyModule.determinePolicy(VFSDeploymentClassLoaderPolicyModule.java:129) at org.jboss.deployers.vfs.plugins.classloader.VFSDeploymentClassLoaderPolicyModule.determinePolicy(VFSDeploymentClassLoaderPolicyModule.java:48) at org.jboss.classloading.spi.dependency.policy.ClassLoaderPolicyModule.getPolicy(ClassLoaderPolicyModule.java:195) at org.jboss.deployers.vfs.plugins.classloader.VFSDeploymentClassLoaderPolicyModule.getPolicy(VFSDeploymentClassLoaderPolicyModule.java:122) at org.jboss.deployers.vfs.plugins.classloader.VFSDeploymentClassLoaderPolicyModule.getPolicy(VFSDeploymentClassLoaderPolicyModule.java:48) at org.jboss.classloading.spi.dependency.policy.ClassLoaderPolicyModule.registerClassLoaderPolicy(ClassLoaderPolicyModule.java:131) at org.jboss.deployers.plugins.classloading.AbstractLevelClassLoaderSystemDeployer.createClassLoader(AbstractLevelClassLoaderSystemDeployer.java:120) at org.jboss.deployers.structure.spi.helpers.AbstractDeploymentContext.createClassLoader(AbstractDeploymentContext.java:562) ... 
21 more Caused by: java.lang.RuntimeException: java.util.zip.ZipException: error in opening zip file at org.jboss.virtual.plugins.context.AbstractExceptionHandler.handleZipEntriesInitException(AbstractExceptionHandler.java:39) at org.jboss.virtual.plugins.context.helpers.NamesExceptionHandler.handleZipEntriesInitException(NamesExceptionHandler.java:63) at org.jboss.virtual.plugins.context.zip.ZipEntryContext.ensureEntries(ZipEntryContext.java:610) at org.jboss.virtual.plugins.context.zip.ZipEntryContext.checkIfModified(ZipEntryContext.java:757) at org.jboss.virtual.plugins.context.zip.ZipEntryContext.getChildren(ZipEntryContext.java:829) at org.jboss.virtual.plugins.context.zip.ZipEntryHandler.getChildren(ZipEntryHandler.java:159) at org.jboss.virtual.plugins.context.DelegatingHandler.getChildren(DelegatingHandler.java:121) at org.jboss.virtual.plugins.context.AbstractVFSContext.getChildren(AbstractVFSContext.java:211) at org.jboss.virtual.plugins.context.AbstractVFSContext.visit(AbstractVFSContext.java:328) at org.jboss.virtual.plugins.context.AbstractVFSContext.visit(AbstractVFSContext.java:298) at org.jboss.virtual.VFS.visit(VFS.java:433) at org.jboss.virtual.VirtualFile.visit(VirtualFile.java:437) at org.jboss.virtual.VirtualFile.getChildren(VirtualFile.java:386) at org.jboss.virtual.VirtualFile.getChildren(VirtualFile.java:367) at org.jboss.classloading.plugins.vfs.PackageVisitor.visit(PackageVisitor.java:200) at org.jboss.virtual.plugins.vfs.helpers.WrappingVirtualFileHandlerVisitor.visit(WrappingVirtualFileHandlerVisitor.java:62) at org.jboss.virtual.plugins.context.AbstractVFSContext.visit(AbstractVFSContext.java:353) at org.jboss.virtual.plugins.context.AbstractVFSContext.visit(AbstractVFSContext.java:298) at org.jboss.virtual.VFS.visit(VFS.java:433) at org.jboss.virtual.VirtualFile.visit(VirtualFile.java:437) at org.jboss.classloading.plugins.vfs.PackageVisitor.determineAllPackages(PackageVisitor.java:94) ... 33 more Caused by: java.util.zip.ZipException: error in opening zip file at java.util.zip.ZipFile.open(Native Method) at java.util.zip.ZipFile.<init>(ZipFile.java:203) at java.util.zip.ZipFile.<init>(ZipFile.java:234) at org.jboss.virtual.plugins.context.zip.ZipFileWrapper.ensureZipFile(ZipFileWrapper.java:175) at org.jboss.virtual.plugins.context.zip.ZipFileWrapper.acquire(ZipFileWrapper.java:245) at org.jboss.virtual.plugins.context.zip.ZipEntryContext.initEntries(ZipEntryContext.java:470) at org.jboss.virtual.plugins.context.zip.ZipEntryContext.ensureEntries(ZipEntryContext.java:603) ... 51 more 18:21:56,772 INFO [JMXConnectorServerService] JMX Connector server: service:jmx:rmi://localhost/jndi/rmi://localhost:1090/jmxconnector 18:21:56,959 INFO [MailService] Mail Service bound to java:/Mail 18:21:59,450 WARN [JBossASSecurityMetadataStore] WARNING! POTENTIAL SECURITY RISK. It has been detected that the MessageSucker component which sucks messages from one node to another has not had its password changed from the installation default. Please see the JBoss Messaging user guide for instructions on how to do this. 18:21:59,489 WARN [AnnotationCreator] No ClassLoader provided, using TCCL: org.jboss.managed.api.annotation.ManagementComponent 18:21:59,789 INFO [TransactionManagerService] JBossTS Transaction Service (JTA version) - JBoss Inc. 
18:21:59,789 INFO [TransactionManagerService] Setting up property manager MBean and JMX layer 18:22:00,040 INFO [TransactionManagerService] Initializing recovery manager 18:22:00,160 INFO [TransactionManagerService] Recovery manager configured 18:22:00,160 INFO [TransactionManagerService] Binding TransactionManager JNDI Reference 18:22:00,184 INFO [TransactionManagerService] Starting transaction recovery manager 18:22:01,243 INFO [Http11Protocol] Initializing Coyote HTTP/1.1 on http-localhost%2F127.0.0.1-8080 18:22:01,244 INFO [AjpProtocol] Initializing Coyote AJP/1.3 on ajp-localhost%2F127.0.0.1-8009 18:22:01,244 INFO [StandardService] Starting service jboss.web 18:22:01,247 INFO [StandardEngine] Starting Servlet Engine: JBoss Web/2.1.2.GA 18:22:01,336 INFO [Catalina] Server startup in 161 ms 18:22:01,360 INFO [TomcatDeployment] deploy, ctxPath=/invoker 18:22:02,014 INFO [TomcatDeployment] deploy, ctxPath=/web-console 18:22:02,459 INFO [TomcatDeployment] deploy, ctxPath=/jbossws 18:22:02,570 INFO [RARDeployment] Required license terms exist, view vfszip:/Applications/jboss-5.0.1.GA/server/default/deploy/jboss-local-jdbc.rar/META-INF/ra.xml 18:22:02,586 INFO [RARDeployment] Required license terms exist, view vfszip:/Applications/jboss-5.0.1.GA/server/default/deploy/jboss-xa-jdbc.rar/META-INF/ra.xml 18:22:02,645 INFO [RARDeployment] Required license terms exist, view vfszip:/Applications/jboss-5.0.1.GA/server/default/deploy/jms-ra.rar/META-INF/ra.xml 18:22:02,663 INFO [RARDeployment] Required license terms exist, view vfszip:/Applications/jboss-5.0.1.GA/server/default/deploy/mail-ra.rar/META-INF/ra.xml 18:22:02,705 INFO [RARDeployment] Required license terms exist, view vfszip:/Applications/jboss-5.0.1.GA/server/default/deploy/quartz-ra.rar/META-INF/ra.xml 18:22:02,801 INFO [SimpleThreadPool] Job execution threads will use class loader of thread: main 18:22:02,850 INFO [QuartzScheduler] Quartz Scheduler v.1.5.2 created. 18:22:02,857 INFO [RAMJobStore] RAMJobStore initialized. 18:22:02,858 INFO [StdSchedulerFactory] Quartz scheduler 'DefaultQuartzScheduler' initialized from default resource file in Quartz package: 'quartz.properties' 18:22:02,858 INFO [StdSchedulerFactory] Quartz scheduler version: 1.5.2 18:22:02,859 INFO [QuartzScheduler] Scheduler DefaultQuartzScheduler_$_NON_CLUSTERED started. 18:22:03,888 INFO [ConnectionFactoryBindingService] Bound ConnectionManager 'jboss.jca:service=DataSourceBinding,name=DefaultDS' to JNDI name 'java:DefaultDS' 18:22:04,530 INFO [ServerPeer] JBoss Messaging 1.4.1.GA server [0] started 18:22:04,624 INFO [QueueService] Queue[/queue/DLQ] started, fullSize=200000, pageSize=2000, downCacheSize=2000 18:22:04,632 WARN [ConnectionFactoryJNDIMapper] supportsFailover attribute is true on connection factory: jboss.messaging.connectionfactory:service=ClusteredConnectionFactory but post office is non clustered. So connection factory will *not* support failover 18:22:04,632 WARN [ConnectionFactoryJNDIMapper] supportsLoadBalancing attribute is true on connection factory: jboss.messaging.connectionfactory:service=ClusteredConnectionFactory but post office is non clustered. 
So connection factory will *not* support load balancing 18:22:04,742 INFO [ConnectionFactory] Connector bisocket://localhost:4457 has leasing enabled, lease period 10000 milliseconds 18:22:04,742 INFO [ConnectionFactory] org.jboss.jms.server.connectionfactory.ConnectionFactory@6af9ad started 18:22:04,746 INFO [QueueService] Queue[/queue/ExpiryQueue] started, fullSize=200000, pageSize=2000, downCacheSize=2000 18:22:04,747 INFO [ConnectionFactory] Connector bisocket://localhost:4457 has leasing enabled, lease period 10000 milliseconds 18:22:04,747 INFO [ConnectionFactory] org.jboss.jms.server.connectionfactory.ConnectionFactory@5ac953 started 18:22:04,750 INFO [ConnectionFactory] Connector bisocket://localhost:4457 has leasing enabled, lease period 10000 milliseconds 18:22:04,750 INFO [ConnectionFactory] org.jboss.jms.server.connectionfactory.ConnectionFactory@e8fa3a started 18:22:05,050 INFO [ConnectionFactoryBindingService] Bound ConnectionManager 'jboss.jca:service=ConnectionFactoryBinding,name=JmsXA' to JNDI name 'java:JmsXA' 18:22:05,073 INFO [TomcatDeployment] deploy, ctxPath=/ 18:22:05,178 INFO [TomcatDeployment] deploy, ctxPath=/jmx-console 18:22:05,290 ERROR [ProfileServiceBootstrap] Failed to load profile: Summary of incomplete deployments (SEE PREVIOUS ERRORS FOR DETAILS): DEPLOYMENTS IN ERROR: Deployment "vfsfile:/Applications/jboss-5.0.1.GA/server/default/deploy/TwitterEAR.ear/" is in error due to the following reason(s): java.util.zip.ZipException: error in opening zip file 18:22:05,301 INFO [Http11Protocol] Starting Coyote HTTP/1.1 on http-localhost%2F127.0.0.1-8080 18:22:05,364 INFO [AjpProtocol] Starting Coyote AJP/1.3 on ajp-localhost%2F127.0.0.1-8009 18:22:05,373 INFO [ServerImpl] JBoss (Microcontainer) [5.0.1.GA (build: SVNTag=JBoss_5_0_1_GA date=200902231221)] Started in 43s:467ms The mentioned ear and war file are both in the deploy directory. Does anybody have hints?
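One quick check, assuming (from the vfsfile: paths in the stack trace) that the EAR is deployed as an exploded directory and that the nested TwitterPoCEJB.jar is what the VFS layer cannot open as a zip - the deploy paths below come from the trace, the "source" paths are placeholders:

    # is the nested jar actually a readable zip archive?
    unzip -t /Applications/jboss-5.0.1.GA/server/default/deploy/TwitterEAR.ear/TwitterPoCEJB.jar

    # if it is an exploded directory or a corrupt file, repackage it
    cd /path/to/source/TwitterPoCEJB
    jar cf TwitterPoCEJB.jar .

    # or repackage the whole application as a single archived EAR and redeploy that
    cd /path/to/source/TwitterEAR
    jar cf TwitterEAR.ear .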

    Read the article

  • Is it possible to scale make -j with distcc more than 5 times?

    - by holmes
    Since distcc cannot keep state - it can only send jobs and headers and let the servers preprocess and compile using nothing but the data just sent - I think the latest distcc has a scalability problem. In my local build environment, which has approx. 10,000 C/C++ files to build, I could only build about 2 times faster than without distcc (but still using make -j), even with 20 build servers. What do you think the problem is? If anyone has achieved more than 10-20x scalability using make -j and distcc, please let me know. The following product claims that it is impossible to scale make -j and distcc beyond 5 times: http://www.electric-cloud.com/products/electricaccelerator.php I think this could be improved by:
    - letting the distccd servers maintain sessions
    - having each session cache its own header directories
    - preprocessing on demand on the distccd server
    This would be done through an LD_PRELOADed library, libdistcc.so, which replaces the stat/open syscalls and fetches the header files over the network (roughly as sketched below). ... Has anyone done this kind of thing?
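    To make the idea concrete, here is a minimal sketch of the kind of interposer I mean - the library name, the fetch_remote_header() helper and the header check are all made up for illustration; none of this is existing distcc code:

        /* libdistcc_preload.c - sketch of an open() interposer.
         * Build: gcc -shared -fPIC -o libdistcc.so libdistcc_preload.c -ldl */
        #define _GNU_SOURCE
        #include <dlfcn.h>
        #include <fcntl.h>
        #include <stdarg.h>
        #include <string.h>
        #include <unistd.h>

        /* Hypothetical helper: ask the distcc client for the header over the
         * existing connection and write it into the per-session cache.
         * Stubbed out here; returns -1 meaning "could not fetch". */
        static int fetch_remote_header(const char *path)
        {
            (void)path;
            return -1;
        }

        static int is_header(const char *path)
        {
            const char *dot = strrchr(path, '.');
            return dot && (strcmp(dot, ".h") == 0 || strcmp(dot, ".hpp") == 0);
        }

        /* Interpose open(): if a header is missing locally, try to pull it
         * over the network before handing the call to the real libc open(). */
        int open(const char *path, int flags, ...)
        {
            static int (*real_open)(const char *, int, ...);
            if (!real_open)
                real_open = (int (*)(const char *, int, ...))dlsym(RTLD_NEXT, "open");

            mode_t mode = 0;
            if (flags & O_CREAT) {
                va_list ap;
                va_start(ap, flags);
                mode = (mode_t)va_arg(ap, int);
                va_end(ap);
            }

            if (is_header(path) && access(path, R_OK) != 0)
                fetch_remote_header(path);

            return real_open(path, flags, (int)mode);
        }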

    Read the article

  • Apk Expansion Files - Application Licensing - Developer account - NOT_LICENSED response

    - by mUncescu
    I am trying to use the APK Expansion Files extension for Android. I have uploaded the APK to the server along with the expansion files. If the application was previously published, I get a response from the server saying NOT_LICENSED. The code I use is:

        APKExpansionPolicy aep = new APKExpansionPolicy(mContext,
                new AESObfuscator(getSALT(), mContext.getPackageName(), deviceId));
        aep.resetPolicy();
        LicenseChecker checker = new LicenseChecker(mContext, aep, getPublicKey());
        checker.checkAccess(new LicenseCheckerCallback() {
            @Override
            public void allow(int reason) {
            }

            @Override
            public void dontAllow(int reason) {
                try {
                    switch (reason) {
                        case Policy.NOT_LICENSED:
                            mNotification.onDownloadStateChanged(IDownloaderClient.STATE_FAILED_UNLICENSED);
                            break;
                        case Policy.RETRY:
                            mNotification.onDownloadStateChanged(IDownloaderClient.STATE_FAILED_FETCHING_URL);
                            break;
                    }
                } finally {
                    setServiceRunning(false);
                }
            }

            @Override
            public void applicationError(int errorCode) {
                try {
                    mNotification.onDownloadStateChanged(IDownloaderClient.STATE_FAILED_FETCHING_URL);
                } finally {
                    setServiceRunning(false);
                }
            }
        });

    So if the application wasn't previously published, the allow method is called. If the application was previously published and now it isn't, the dontAllow method is called. I have tried:
    - the test-response setup described at http://developer.android.com/guide/google/play/licensing/setting-up.html#test-response - it says that if you use a developer or test account on your test device you can choose a specific response; I set LICENSED as the response and still get NOT_LICENSED
    - resetting the phone and clearing the Google Play Store cache and application data
    - changing the versionCode in different combinations
    None of it works. Edit: In case someone else is facing this problem, I received an email from the Google support team: "We are aware that newly created accounts for testing in-app billing and Google licensing server (LVL) return errors, and are working on resolving this problem. Please stay tuned. In the meantime, you can use any accounts created prior to August 1st, 2012, for testing." So it seems to be a problem on their server; if I use the main developer account everything works fine.

    Read the article

  • How to analyze 'dbcc memorystatus' result in SQL Server 2008

    - by envykok
    Currently I am facing a SQL memory pressure issue. I have run 'dbcc memorystatus'; here is part of the result:
    Memory Manager (KB): VM Reserved 23617160, VM Committed 14818444, Locked Pages Allocated 0, Reserved Memory 1024, Reserved Memory In Use 0
    Memory node Id = 0 (KB): VM Reserved 23613512, VM Committed 14814908, Locked Pages Allocated 0, MultiPage Allocator 387400, SinglePage Allocator 3265000
    MEMORYCLERK_SQLBUFFERPOOL (node 0) (KB): VM Reserved 16809984, VM Committed 14184208, Locked Pages Allocated 0, SM Reserved 0, SM Committed 0, SinglePage Allocator 0, MultiPage Allocator 408
    MEMORYCLERK_SQLCLR (node 0) (KB): VM Reserved 6311612, VM Committed 141616, Locked Pages Allocated 0, SM Reserved 0, SM Committed 0, SinglePage Allocator 1456, MultiPage Allocator 20144
    CACHESTORE_SQLCP (node 0) (KB): VM Reserved 0, VM Committed 0, Locked Pages Allocated 0, SM Reserved 0, SM Committed 0, SinglePage Allocator 3101784, MultiPage Allocator 300328
    Buffer Pool (Value): Committed 1742946, Target 1742946, Database 1333883, Dirty 940, In IO 1, Latched 18, Free 89, Stolen 408974, Reserved 2080, Visible 1742946, Stolen Potential 1579938, Limiting Factor 13, Last OOM Factor 0, Page Life Expectancy 5463
    Process/System Counts (Value): Available Physical Memory 258572288, Available Virtual Memory 8771398631424, Available Paging File 16030617600, Working Set 15225597952, Percent of Committed Memory in WS 100, Page Faults 305556823, System physical memory high 1, System physical memory low 0, Process physical memory low 0, Process virtual memory low 0
    Procedure Cache (Value): TotalProcs 11382, TotalPages 430160, InUsePages 28
    Can you help me analyze this result? Is the memory issue caused by a large number of cached execution plans, or by something else?
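    One way to check the plan-cache side of that question, using the standard sys.dm_exec_cached_plans DMV available in SQL Server 2008 (a generic diagnostic query, not specific to this server):

        -- How much of the plan cache is in use, and for what kind of objects?
        SELECT objtype,
               COUNT(*)                                         AS plan_count,
               SUM(CAST(size_in_bytes AS bigint)) / 1024 / 1024 AS size_mb,
               SUM(CASE WHEN usecounts = 1 THEN 1 ELSE 0 END)   AS single_use_plans
        FROM sys.dm_exec_cached_plans
        GROUP BY objtype
        ORDER BY size_mb DESC;

        -- If most of the space is single-use "Adhoc" plans, the SQL Server 2008
        -- setting 'optimize for ad hoc workloads' (sp_configure) is the usual
        -- first mitigation to look at.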

    Read the article

  • hibernate validationQuery.. proper way to use it

    - by cometta
    Sometimes the connection from the application to the database drops and I get SQLState: 08006, error code: 17002. Below is my configuration for database pooling:
        <prop key="hibernate.dialect">org.hibernate.dialect.Oracle10gDialect</prop>
        <prop key="hibernate.show_sql">true</prop>
        <prop key="hibernate.format_sql">true</prop>
        <prop key="hibernate.use_sql_comments">true</prop>
        <prop key="hibernate.cglib.use_reflection_optimizer">true</prop> <!-- true == slower startup, but better performance at runtime -->
        <prop key="hibernate.hbm2ddl.auto">update</prop>
        <prop key="hibernate.c3p0.min_size">5</prop>
        <prop key="hibernate.c3p0.max_size">20</prop>
        <prop key="hibernate.c3p0.timeout">1800</prop>
        <prop key="hibernate.c3p0.max_statements">100</prop> <!-- c3p0's PreparedStatement cache. Zero means statement caching is turned off. -->
    Can anyone advise whether I should use validationQuery, onBorrow, etc., and why they are needed? What other properties should I use to improve the stability of the connection from my application to Oracle 10g?
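    For reference, validationQuery and testOnBorrow are Apache DBCP property names; the pool configured above is c3p0, where the equivalent mechanism is connection testing. A hedged sketch of that configuration (exactly which properties Hibernate forwards to c3p0 varies by version, so treat these names as a starting point to verify, not a confirmed setup):

        <!-- evict dead idle connections: test them every 300 seconds -->
        <prop key="hibernate.c3p0.idle_test_period">300</prop>

        # c3p0.properties on the classpath, for c3p0-native settings that have
        # no hibernate.c3p0.* shortcut (names taken from c3p0's own documentation)
        c3p0.testConnectionOnCheckout=true
        c3p0.preferredTestQuery=SELECT 1 FROM DUAL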

    Read the article

  • NHibernate, Databinding to DataGridView, Lazy Loading, and Session management - need advice

    - by Tom Bushell
    My main application form (WinForms) has a DataGridView that uses data binding and Fluent NHibernate to display data from a SQLite database. This form is open for the entire time the application is running. For performance reasons, I set the convention DefaultLazy.Always() for all DB access. So far, the only way I've found to make this work is to keep a Session (let's call it MainSession) open all the time for the main form, so NHibernate can lazy load new data as the user navigates with the grid. Another part of the application can run in the background and save to the DB. Currently (after considerable struggle), my approach is to call MainSession.Disconnect(), create a disposable Session for each Save, and MainSession.Reconnect() after finishing the Save (sketched below). Otherwise SQLite will throw "The database file is locked" exceptions. This seems to be working well so far, but past experience has made me nervous about keeping a session open for a long time (I ran into performance problems when I tried to use a single session for both Saves and Loads - the cache filled up, and bogged down everything - see http://stackoverflow.com/questions/2526675/commit-is-very-slow-in-my-nhibernate-sqlite-project). So, my question - is this a good approach, or am I looking at problems down the road? If it's a bad approach, what are the alternatives? I've considered opening and closing my main session whenever the user navigates with the grid, but it's not obvious to me how I would do that - hook every event from the grid that could possibly cause a lazy load? I have the nagging feeling that trying to manage my own sessions this way is fundamentally the wrong approach, but it's not obvious what the right one is.
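    For clarity, this is roughly what the save path described above looks like - a sketch only; _sessionFactory, _mainSession and the surrounding method are placeholder names, not the real code:

        // Sketch of the disconnect / save-in-its-own-session / reconnect pattern.
        public void SaveInBackground(object entity)
        {
            _mainSession.Disconnect();              // give up the SQLite connection
            try
            {
                using (var saveSession = _sessionFactory.OpenSession())
                using (var tx = saveSession.BeginTransaction())
                {
                    saveSession.SaveOrUpdate(entity);
                    tx.Commit();
                }
            }
            finally
            {
                _mainSession.Reconnect();           // main grid can lazy load again
            }
        }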

    Read the article

  • How do I add PHP support to Apache 2 without breaking my current installation?

    - by Hobhouse
    I run Apache 2 with WSGI (for a Django-app) on a Ubuntu box. I want to use Nagios for server monitoring, and for this purpose it seems I have to add PHP support to Apache. When I installed Apache 2, I did this: apt-get install apache2 apache2.2-common apache2-mpm-worker apache2-threaded-dev libapache2-mod-wsgi python-dev Available modules for apache2 are these: /etc/apache2/mods-available$ ls actions.conf authn_default.load cache.load deflate.conf filter.load mime.conf proxy_ftp.load suexec.load actions.load authn_file.load cern_meta.load deflate.load headers.load mime.load proxy_http.load unique_id.load alias.conf authnz_ldap.load cgi.load dir.conf ident.load mime_magic.conf rewrite.load userdir.conf alias.load authz_dbm.load cgid.conf dir.load imagemap.load mime_magic.load setenvif.conf userdir.load asis.load authz_default.load cgid.load disk_cache.conf include.load negotiation.conf setenvif.load usertrack.load auth_basic.load authz_groupfile.load charset_lite.load disk_cache.load info.conf negotiation.load speling.load version.load auth_digest.load authz_host.load dav.load dump_io.load info.load proxy.conf ssl.conf vhost_alias.load authn_alias.load authz_owner.load dav_fs.conf env.load ldap.load proxy.load ssl.load wsgi.conf authn_anon.load authz_user.load dav_fs.load expires.load log_forensic.load proxy_ajp.load status.conf wsgi.load authn_dbd.load autoindex.conf dav_lock.load ext_filter.load mem_cache.conf proxy_balancer.load status.load authn_dbm.load autoindex.load dbd.load file_cache.load mem_cache.load proxy_connect.load substitute.load What is the best way for me to add PHP support to Apache 2 without breaking my current installation and configuration?
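    One point worth knowing before choosing a route: on Ubuntu the stock mod_php package (libapache2-mod-php5) depends on the prefork MPM, so installing it would replace the apache2-mpm-worker package listed above. If the aim is to leave the worker/WSGI setup untouched, running PHP as FastCGI is the usual alternative - a hedged sketch, with package names and directives from that Ubuntu/Apache 2.2 era, so verify them against the actual release:

        # keep the worker MPM: add PHP through FastCGI instead of mod_php
        sudo apt-get install libapache2-mod-fcgid php5-cgi
        sudo a2enmod fcgid
        sudo /etc/init.d/apache2 restart

        # then, in the vhost or directory block that should serve PHP (e.g. Nagios):
        #   AddHandler fcgid-script .php
        #   FCGIWrapper /usr/bin/php-cgi .php
        #   Options +ExecCGI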

    Read the article

  • jQuery: AJAX umlauts & special characters are a mess

    - by rayne
    I've just created my first ajax function with jQuery, which actually works, but unfortunately the character encoding (for characters like ä, ö, ü, ß, c, c, å, ø) is a nightmare. My files and my database are all UTF-8. I've tried a multitude of options in the ajax function and the PHP function, none of which were satisfactory. This is my ajax call:
        var dataString = {
            'name': name,
            'mail': mail
            // other stuff
        };
        $.ajax({
            type: "POST",
            url: "/post.php",
            data: dataString,
            contentType: "application/x-www-form-urlencoded;charset=UTF-8",
            cache: false,
            success: function(html){
                // do stuff
            }
        });
    I've tried it without contentType: "application/x-www-form-urlencoded;charset=UTF-8" and I've tried to wrap the affected data in encodeURIComponent(), none of which worked. When I use that AJAX with htmlentities() in my PHP, my umlauts look like this in plain text: UE Ã?, AE Ã?, OE Ã?, ue ü, ae ä, oe o. And like this in the database: UE Ãœ , AE Ä, OE Ö, ue ü, ae ä, oe o. If I don't use htmlentities() but mysql_real_escape_string() instead (or neither), they look good in plain text, but they look like this in the database: AE Ä, OE Ö, UE Ãœ, ae ä oe ö ue ü. I've been trying tons of options for hours now, but I can't find a solution that works. So far the only option I seem to have is letting them look like a total mess in the database, but that would be very counterproductive if those data sets need to be edited.
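    For comparison, the usual server-side checklist for this kind of mojibake, as a sketch of post.php - the table, column and credential names are made up, and it assumes the old mysql_* extension implied by mysql_real_escape_string() above:

        <?php
        // post.php - sketch: keep everything UTF-8 end to end
        header('Content-Type: text/html; charset=utf-8');

        $link = mysql_connect('localhost', 'db_user', 'db_pass');
        mysql_select_db('mydb', $link);

        // Without this the client often talks latin1 to a utf8 table,
        // which produces exactly the Ã-style garbage described above
        mysql_set_charset('utf8', $link);

        // Escape for SQL only; do not htmlentities() before storing -
        // encode for HTML at output time instead
        $name = mysql_real_escape_string($_POST['name'], $link);
        $mail = mysql_real_escape_string($_POST['mail'], $link);

        mysql_query("INSERT INTO contacts (name, mail) VALUES ('$name', '$mail')", $link);

        // when echoing data back to the page:
        echo htmlentities($name, ENT_QUOTES, 'UTF-8');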

    Read the article

< Previous Page | 241 242 243 244 245 246 247 248 249 250 251 252  | Next Page >