Search Results

Search found 3179 results on 128 pages for 'merge replication'.

Page 73/128 | < Previous Page | 69 70 71 72 73 74 75 76 77 78 79 80  | Next Page >

  • git pull-push giving error

    - by ntidote
    Hi, I cloned a local repository onto another server (http://localipaddress/git/project). It created an empty repository. When I tried to pull from the repository, it gave me this error:

        Your configuration specifies to merge with the ref 'master' from the remote, but no such ref was detected.

    On push I get the following message:

        error: Cannot access URL "http://localipaddress/git/project", return code 22
        fatal: git-http-push failed

    What could have gone wrong?
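    One hedged reading of the symptoms: the remote repository is empty, so there is no master ref to pull yet, and return code 22 is curl's HTTP error code, which with dumb-HTTP push usually means the URL is not writable (WebDAV not enabled) or is not the smart-HTTP endpoint. A minimal sketch of seeding the empty remote first (branch and remote names are the Git defaults, not taken from the question):

        # in the cloned working copy, which starts out with no commits
        git commit --allow-empty -m "initial commit"   # any first commit will do
        git push -u origin master                      # creates 'master' on the empty remote, sets upstream
        git pull                                       # subsequent pulls now have a ref to merge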

    Read the article

  • How to partition my two hard drives

    - by Thoma Bigueres
    I've got a computer running Windows Server 2008 R2 on which I have: 60 GB disk C: NTFS (Disk 0) and 40 GB of unallocated space (Disk 1). I would like to repartition my disks so that I'll have: 30 GB disk C: and 70 GB disk D:. Can you help me with the steps needed to reach this configuration? I saw that first of all I should merge the two volumes into one, but when I right-click the C: volume, I can't click on the "Extend Volume" link. Do you know how I can overcome this? Thanks a lot
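    For what it's worth, "Extend Volume" is greyed out because a basic volume can only grow into contiguous free space on the same disk; a D: volume spanning the freed half of Disk 0 plus all of Disk 1 needs both disks converted to dynamic. A rough diskpart sketch, offered with the caveat that the exact spanned-volume syntax and sizes should be double-checked against diskpart's built-in help before touching a system disk (the # annotations are not diskpart syntax):

        diskpart
        select volume c
        shrink desired=30720            # free ~30 GB at the end of C:
        select disk 0
        convert dynamic
        select disk 1
        convert dynamic
        create volume spanned disk=0    # start D: in the freed space on Disk 0
        extend disk=1                   # span it onto Disk 1
        assign letter=d
        format fs=ntfs quick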

    Read the article

  • Multiple pages per sheet (PDF)

    - by smihael
    I use the following commands

        pdf2ps input.pdf - | psnup -pA4 -4 >> output.ps
        ps2pdf output.ps output.pdf
        rm output.ps

    to merge multiple pages (in this case 4) from the input file onto one sheet in the output file. How can I modify the pipeline so that I won't have to use two commands, just a single one-liner? Is there any other command-line tool that would do the same and can work directly on PDF files?
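    A sketch of both options: ps2pdf accepts - for stdin, so the temporary file can be dropped entirely, and pdfnup (from the pdfjam package, assumed installed) does the n-up step on PDF files directly:

        pdf2ps input.pdf - | psnup -pA4 -4 | ps2pdf - output.pdf
        # or, staying in PDF the whole way:
        pdfnup --nup 2x2 --outfile output.pdf input.pdf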

    Read the article

  • Moving Binary logs in Mysql to a different harddisk

    - by Darini
    This question is about MySQL binary logging. We need to move the binary logs to a different hard disk. What is the configuration change required in MySQL? Currently the binary logs go into the same folder as the ibdata files, and there is a replication slave running which needs the binary logs.
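    A sketch of the usual procedure, assuming the new disk is mounted at /data (the path is illustrative): point log-bin at an absolute path, move the existing log files, and rewrite the paths stored in the .index file, since the slave may still request the older logs:

        # in my.cnf, under [mysqld]:
        #     log-bin = /data/binlogs/mysql-bin
        service mysql stop
        mkdir -p /data/binlogs && chown mysql:mysql /data/binlogs
        mv /var/lib/mysql/mysql-bin.* /data/binlogs/
        # the .index file lists every binlog by path; point the entries at the new directory
        sed -i 's|^.*/|/data/binlogs/|' /data/binlogs/mysql-bin.index
        service mysql start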

    Read the article

  • Does anyone have a valid and working example of OpenLDAP meta backend?

    - by QXT
    I have been Googling my fingers off and simply cannot find a working example of how to merge/proxy an OpenLDAP server and a Windows AD server. Has anyone worked with this before? Any suggestions would be appreciated. The idea is simple:

        openldap.mydomain.local ---- Linux LDAP server
        winad.mydomain.local ---- Windows AD server

    Some users are on Linux and some on WinAD. OpenLDAP should search both on login. A working example would be appreciated.
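    A minimal slapd.conf sketch of the back-meta approach, offered as a starting point rather than a known-good config: it assumes slapd was built with the meta backend, the suffixes and OUs are placeholders, and AD normally refuses anonymous searches, so a bind user on the AD side is needed:

        database        meta
        suffix          "dc=mydomain,dc=local"

        # target 1: the local OpenLDAP tree, exposed under ou=linux
        uri             "ldap://openldap.mydomain.local/ou=linux,dc=mydomain,dc=local"
        suffixmassage   "ou=linux,dc=mydomain,dc=local" "dc=mydomain,dc=local"

        # target 2: Active Directory, exposed under ou=ad
        uri             "ldap://winad.mydomain.local/ou=ad,dc=mydomain,dc=local"
        suffixmassage   "ou=ad,dc=mydomain,dc=local" "cn=Users,dc=mydomain,dc=local"
        idassert-bind   bindmethod=simple
                        binddn="cn=proxyuser,cn=Users,dc=mydomain,dc=local"
                        credentials="secret"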

    Read the article

  • Can I combine two LANs into one to get double speed?

    - by Salaam
    Hey guys, I have 2 LAN adapters in my PC and 2 routers (a TP-Link WR941N each) with 2 internet accounts (512 Kbps each) connected to the same ISP signal and company, using a NanoStation 5 from ubnt. As you know, the connections work separately by default (I use Windows 7 64-bit). Can I merge them to get double speed (download/upload simultaneously) using regedit, special software, or any other method? I'd much appreciate it if you could help me do that. Best Regards, Salaam

    Read the article

  • graphical svn client for creating branches, merging branches etc?

    - by ajsie
    Hi, I wonder if there are any GUI tools to administrate an SVN repo? Or do you actually have to log into the Ubuntu server with SSH and use all the svn commands to copy the trunk to a branch, merge the data back and forth, copy to a tag, delete, and so on? I'm using NetBeans on Mac. I think it only handles the communication between a local project and the repo, not the flows between trunk, branch and tag (creating, deleting, viewing differences, etc.).
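    For reference, the flow in question never requires a shell on the server; svn copy and friends operate on repository URLs remotely (the URL below is a placeholder):

        # create a branch (a cheap server-side copy)
        svn copy http://server/repo/trunk http://server/repo/branches/my-feature -m "create branch"
        # pull trunk changes into a branch working copy
        svn merge http://server/repo/trunk
        # tag a release
        svn copy http://server/repo/trunk http://server/repo/tags/1.0 -m "tag 1.0"
        # view differences between trunk and branch
        svn diff http://server/repo/trunk http://server/repo/branches/my-feature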

    Read the article

  • CouchDB in Production

    - by NoelAdy
    I have been using CouchDB on some prototype applications and it has been brilliant: very easy to use and extremely quick. I was wondering if anyone has been using it in production and has any views on its reliability, performance, suitability for operational management, etc.? I am considering using it to support a service layer and would make use of its replication functionality. Any comments/experiences would be most welcome.

    Read the article

  • Good resource for studying Database High Availability techniques

    - by Invincible
    Hello, can anybody suggest a good resource/book on database high-availability techniques? Also on the high availability of system software like intrusion prevention systems or web servers. I am treating high availability as a global term which covers clustering, cloud computing, replication, replica management, and distributed synchronization for clusters. Thanks in advance!

    Read the article

  • windows 2003 server : can't join domain

    - by phill
    I originally tried to rejoin a computer to a network, which led to a "cannot find domain" error. The username/password box doesn't even come up. Some tests I ran: I can ping the server, but I can't ping the domain name domain1.local. nslookup can't find the domain either; it looks to the ISP's DNS instead of my own to resolve the local machines. So I go to the DNS server and run netdiag.exe, which gives me this error:

        DNS test . . . . . . . . . . . . . : Failed
        [WARNING] Cannot find a primary authoritative DNS server for the name
            'stmartinsrv.stmartin.local.'. [RCODE_SERVER_FAILURE]
            The name 'srv.domain1.local.' may not be registered in DNS.
        [WARNING] The DNS entries for this DC are not registered correctly on DNS
            server '68.94.156.1'. Please wait for 30 minutes for DNS server replication.
        [WARNING] The DNS entries for this DC are not registered correctly on DNS
            server '68.94.157.1'. Please wait for 30 minutes for DNS server replication.
        [FATAL] No DNS servers have the DNS records for this DC registered.

        Redir and Browser test . . . . . . : Passed
            List of NetBt transports currently bound to the Redir
                NetBT_Tcpip_{04BB0F6B-06AE-4D60-80C8-2A7A24C1D87B}
            The redir is bound to 1 NetBt transport.
            List of NetBt transports currently bound to the browser
                NetBT_Tcpip_{04BB0F6B-06AE-4D60-80C8-2A7A24C1D87B}
            The browser is bound to 1 NetBt transport.

    Then running dcdiag:

        C:\Program Files\Support Tools>dcdiag
        Domain Controller Diagnosis
        Performing initial setup:
           Done gathering initial info.
        Doing initial required tests
           Testing server: Default-First-Site-Name\SRV
              Starting test: Connectivity
                 The host 1c99f63c-49ec-40db-b3d3-6265c00fbd3e._msdcs.domain1.local
                 could not be resolved to an IP address. Check the DNS server, DHCP,
                 server name, etc. Although the Guid DNS name
                 (1c99f63c-49ec-40db-b3d3-6265c00fbd3e._msdcs.domain1.local) couldn't
                 be resolved, the server name (srv.domain1.local) resolved to the IP
                 address (192.168.1.21) and was pingable. Check that the IP address
                 is registered correctly with the DNS server.
                 ......................... SRV failed test Connectivity
        Doing primary tests
           Testing server: Default-First-Site-Name\SRV
              Skipping all tests, because server SRV is not responding to directory
              service requests
           Running partition tests on : ForestDnsZones
              Starting test: CrossRefValidation
                 ......................... ForestDnsZones passed test CrossRefValidation
              Starting test: CheckSDRefDom
                 ......................... ForestDnsZones passed test CheckSDRefDom
           Running partition tests on : DomainDnsZones
              Starting test: CrossRefValidation
                 ......................... DomainDnsZones passed test CrossRefValidation
              Starting test: CheckSDRefDom
                 ......................... DomainDnsZones passed test CheckSDRefDom
           Running partition tests on : Schema
              Starting test: CrossRefValidation
                 ......................... Schema passed test CrossRefValidation
              Starting test: CheckSDRefDom
                 ......................... Schema passed test CheckSDRefDom
           Running partition tests on : Configuration
              Starting test: CrossRefValidation
                 ......................... Configuration passed test CrossRefValidation
              Starting test: CheckSDRefDom
                 ......................... Configuration passed test CheckSDRefDom
           Running partition tests on : domain1
              Starting test: CrossRefValidation
                 ......................... domain1 passed test CrossRefValidation
              Starting test: CheckSDRefDom
                 ......................... domain1 passed test CheckSDRefDom
           Running enterprise tests on : domain1.local
              Starting test: Intersite
                 ......................... domain1.local passed test Intersite
              Starting test: FsmoCheck
                 ......................... domain1.local passed test FsmoCheck

    From previous postings, I've tried adding the domain suffix to the NIC IP properties on both the client machine and the DC server, which didn't help. Note: there is only one NIC on the server. Any ideas? Thanks in advance
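    Reading the warnings, the DC is trying to register its records with the ISP's resolvers (68.94.156.1 / 68.94.157.1), which suggests the NIC's DNS entries point at the ISP instead of the DC itself. A hedged checklist once the preferred DNS server is set to 192.168.1.21 on both machines (the # annotations are not cmd syntax):

        ipconfig /all        # confirm 192.168.1.21 is the only DNS server listed
        ipconfig /flushdns
        ipconfig /registerdns
        net stop netlogon && net start netlogon    # on the DC: re-registers the _msdcs/SRV records
        nslookup -type=SRV _ldap._tcp.dc._msdcs.domain1.local 192.168.1.21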

    Read the article

  • Recommendations for distributed processing/distributed storage systems

    - by Eddie
    At my organization we have a processing and storage system spread across two dozen Linux machines that handles over a petabyte of data. The system right now is very ad hoc; processing automation and data management is handled by a collection of large Perl programs on independent machines. I am looking at distributed processing and storage systems to make it easier to maintain, to evenly distribute load and data with replication, and to grow in disk space and compute power.

    The system needs to be able to handle millions of files, varying in size between 50 megabytes and 50 gigabytes. Once created, the files will not be appended to, only replaced completely if need be. The files need to be accessible via HTTP for customer download. Right now, processing is automated by Perl scripts (that I have complete control over) which call a series of other programs (that I don't have control over because they are closed source) that essentially transform one data set into another. No data mining happening here.

    Here is a quick list of things I am looking for:

    Reliability: These data must be accessible over HTTP about 99% of the time, so I need something that does data replication across the cluster.
    Scalability: I want to be able to add more processing power and storage easily and rebalance the data across the cluster.
    Distributed processing: Easy and automatic job scheduling and load balancing that fits the processing workflow I briefly described above.
    Data location awareness: Not strictly required but desirable. Since data and processing will be on the same set of nodes, I would like the job scheduler to schedule jobs on or close to the node that the data is actually on, to cut down on network traffic.

    Here is what I've looked at so far:

    Storage management:
    GlusterFS: Looks really nice and easy to use, but doesn't seem to have a way to figure out what node(s) a file actually resides on to supply as a hint to the job scheduler.
    GPFS: Seems like the gold standard of clustered filesystems. Meets most of my requirements except, like GlusterFS, data location awareness.
    Ceph: Seems way too immature right now.

    Distributed processing:
    Sun Grid Engine: I have a lot of experience with this and it's relatively easy to use (once it is configured properly, that is). But Oracle got its icy grip around it and it no longer seems very desirable.

    Both:
    Hadoop/HDFS: At first glance it looked like Hadoop was perfect for my situation: distributed storage and job scheduling, and it was the only thing I found that would give me the data location awareness that I wanted. But I don't like the namenode being a single point of failure. Also, I'm not really sure if the MapReduce paradigm fits the type of processing workflow that I have. It seems like you need to write all your software specifically for MapReduce instead of just using Hadoop as a generic job scheduler.
    OpenStack: I've done some reading on this but I'm having trouble deciding if it fits well with my problem or not.

    Does anyone have opinions or recommendations for technologies that would fit my problem well? Any suggestions or advice would be greatly appreciated. Thanks!

    Read the article

  • Magento hosting on a budget

    - by spa
    I have to do a setup for Magento. My constraints are primarily ease of setup, fault tolerance/failover, and cost. I have three identical physical servers to get the job done. Each server node has an i7 quad core, 16 GB RAM, and 2x3 TB HD in a software RAID 1 configuration. Each node runs Ubuntu 12.04 right now. I have an additional IP address which can be routed to any of these nodes. The Magento shop has max. 1000 products, 50% of them bundle products. I would estimate that max. 100 users are active at once. This leads me to the conclusion that performance is not top priority here.

    My first setup idea: one node (lb) runs nginx as a load balancer. The additional IP is used with the domain name and routed to this node by default. Nginx distributes the load equally to the other two nodes (shop1, shop2). Shop1 and shop2 are configured identically: each server runs Apache2 and MySQL, with the MySQLs configured for master/slave replication. My failover strategy: lb fails = route the IP to shop1 (the MySQL master), continue. Shop1 fails = lb will handle that automatically; promote the MySQL slave on shop2 to master, reconfigure Magento to use shop2 for writes, continue. Shop2 fails = lb will handle that automatically, continue. Is this a sane strategy? Has anyone done a similar setup with Magento? A minimal load-balancer sketch follows below.

    My second setup idea: another way to do it would be to use DRBD for storing the MySQL data files on shop1 and shop2. I understand that in this scenario only one node/MySQL instance can be active, with the other used as hot standby. So in case shop1 fails, I would start up MySQL on shop2, route the IP to shop2, and continue. I like that, as the MySQL setup is easier and the nodes can be configured 99% identically. In this case the load balancer becomes useless and I have a spare server.

    My third setup idea: the third way might be master-master replication of the MySQL databases. However, in my opinion this might be tricky, as Magento isn't built for this scenario (e.g. conflicting IDs for new rows). I would not do that until I have heard of a working example.

    Could you give me advice on which route to follow? There seems to be no single "good" way to do it. E.g. I read blog posts which describe a MySQL master/slave setup for Magento, but elsewhere I read that data might get duplicated when the slave lags behind the master (e.g. when an order is placed, a customer might get created twice). I'm kind of lost here.
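    A minimal nginx sketch of the first idea's load balancer (hostnames are placeholders; ip_hash is one simple way to keep a visitor on the same backend unless sessions are centralized in the database or memcached):

        upstream magento {
            ip_hash;                          # pin each client to one backend
            server shop1.internal:80;
            server shop2.internal:80;
        }
        server {
            listen 80;
            server_name shop.example.com;
            location / {
                proxy_pass http://magento;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
            }
        }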

    Read the article

  • Duplicate DNS Zones (Error 4515 in Event Log )

    - by Campo
    I am getting these two errors in the DNS event log (errors at the end of the question). I have confirmed I do have duplicate zones, and I am wondering which ones to delete. The DomainDnsZones partition contains all of our DNS records, but it does not have the _msdcs zone; that is in the ForestDnsZones partition along with the duplicates that are not in use (here is a picture of that). 3 questions. I understand the advantages of having DNS in the ForestDnsZones partition, so:

    1. Why is DNS using the DomainDnsZones partition, and is that acceptable considering _msdcs is in ForestDnsZones?
    2. If so, should I just delete DC=1.168.192.in-addr.arpa and DC=supernova.local from the ForestDnsZones partition? Or should I try to make those the ones in use? What are those steps? I understand how to delete, that is simple, but if I must move zones, some info would be appreciated there.
    3. Just to confirm, from my understanding: I can delete the two duplicates in the ForestDnsZones partition and leave _msdcs.supernova.local, as that's required there. This will resolve the errors I see.

    Just FYI, when I look in those folders from the ForestDnsZones partition, they have just 2 and 1 entries respectively, so they are obviously not in use compared to the others. I am pretty sure I understand the steps to complete this, but if you would like to provide that info, bonus points!

        Event Type: Warning
        Event Source: DNS
        Event Category: None
        Event ID: 4515
        Date: 1/4/2011
        Time: 2:14:18 PM
        User: N/A
        Computer: STANLEY
        Description: The zone 1.168.192.in-addr.arpa was previously loaded from the
        directory partition DomainDnsZones.supernova.local but another copy of the
        zone has been found in directory partition ForestDnsZones.supernova.local.
        The DNS Server will ignore this new copy of the zone. Please resolve this
        conflict as soon as possible.
        If an administrator has moved this zone from one directory partition to
        another this may be a harmless transient condition. In this case, no action
        is necessary. The deletion of the original copy of the zone should soon
        replicate to this server.
        If there are two copies of this zone in two different directory partitions
        but this is not a transient caused by a zone move operation then one of
        these copies should be deleted as soon as possible to resolve this conflict.
        To change the replication scope of an application directory partition
        containing DNS zones and for more details on storing DNS zones in the
        application directory partitions, please see Help and Support.
        For more information, see Help and Support Center at
        http://go.microsoft.com/fwlink/events.asp.
        Data: 0000: 89 25 00 00

    and

        Event Type: Warning
        Event Source: DNS
        Event Category: None
        Event ID: 4515
        Date: 1/4/2011
        Time: 2:14:18 PM
        User: N/A
        Computer: STANLEY
        Description: The zone supernova.local was previously loaded from the
        directory partition DomainDnsZones.supernova.local but another copy of the
        zone has been found in directory partition ForestDnsZones.supernova.local.
        The DNS Server will ignore this new copy of the zone. Please resolve this
        conflict as soon as possible.
        (remainder identical to the first event)
        Data: 0000: 89 25 00 00
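    A couple of read-only dnscmd commands that may help confirm which partition each copy lives in before deleting anything (run on the DC; the stale zone objects under the forest partition are then usually removed with ADSI Edit, connected to DC=ForestDnsZones,DC=supernova,DC=local):

        dnscmd stanley /EnumDirectoryPartitions
        dnscmd stanley /DirectoryPartitionInfo ForestDnsZones.supernova.local
        dnscmd stanley /DirectoryPartitionInfo DomainDnsZones.supernova.local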

    Read the article

  • VPS 512 MB RAM with WordPressMU consumes lots of memory

    - by CAPitalZ
    I have googled for days and gathered all optimization suggestions and tried them. My sites are not getting any high hits, maybe 100 hits per day (all my sites combined). Here are my specs: 512 MB RAM VPS, burstable to 1024 MB; CentOS 5 32-bit with cPanel/WHM; Apache 2.2; MySQL 5.0; PHP 5.3.2.

    Here are my configs. I have 2 WordPressMU production sites and 1 test site.

    my.cnf:

        # The following options will be passed to all MySQL clients
        [client]
        #password = your_password
        port = 3306
        socket = /var/lib/mysql/mysql.sock

        # Here follows entries for some specific programs

        # The MySQL server
        [mysqld]
        port = 3306
        socket = /var/lib/mysql/mysql.sock
        skip-locking
        skip-bdb
        skip-innodb
        key_buffer = 16M
        max_allowed_packet = 1M
        table_cache = 64
        sort_buffer_size = 512K
        net_buffer_length = 8K
        read_buffer_size = 256K
        read_rnd_buffer_size = 512K
        myisam_sort_buffer_size = 8M
        #CAPitalZ
        thread_cache_size=8
        thread_concurrency=4
        #query_cache_type=1
        #query_cache_limit=1M
        query_cache_size=16M
        concurrent_insert=2
        low_priority_updates=1
        max_connections=50
        tmp_table_size=16M
        max_heap_table_size=16M
        join_buffer_size=1M
        interactive_timeout=25
        wait_timeout=1000
        #connect_timout=10     (not able to restart mysql with this enabled)
        max_connect_errors=10
        # Don't listen on a TCP/IP port at all. This can be a security enhancement,
        # if all processes that need to connect to mysqld run on the same host.
        # All interaction with mysqld must be made via Unix sockets or named pipes.
        # Note that using this option without enabling named pipes on Windows
        # (via the "enable-named-pipe" option) will render mysqld useless!
        # skip-networking
        # Disable Federated by default
        skip-federated
        # Replication Master Server (default)
        # binary logging is required for replication
        log-bin=mysql-bin
        # required unique id between 1 and 2^32 - 1
        # defaults to 1 if master-host is not set
        # but will not function as a master if omitted
        server-id = 1

        [mysqld_safe]
        open_files_limit=8192

        [mysqldump]
        quick
        max_allowed_packet = 16M

        [mysql]
        no-auto-rehash
        # Remove the next comment character if you are not familiar with SQL
        #safe-updates

        [isamchk]
        key_buffer = 20M
        sort_buffer_size = 20M
        read_buffer = 2M
        write_buffer = 2M

        [myisamchk]
        key_buffer = 20M
        sort_buffer_size = 20M
        read_buffer = 2M
        write_buffer = 2M

        [mysqlhotcopy]
        interactive-timeout

    httpd.conf: I have unselected many modules and recompiled using EasyApache in WHM. I only have the following modules built: Deflate, Expires, Fileprotect, Imagemap, MPM Prefork Version [default], EAccelerator for PHP, Bcmath, Calendar, CurlSSL [I'm using Curl, but I don't have any https sites], Expat, GD [for image cropping], Gettext, Imap, Mbregex [default], Mbstring [need both Mbregex and Mbstring for utf-8], Mysql of the system, MySQL "Improved" extension, Sockets, TTF (FreeType) [I'm using a custom font], and Zlib.

    Under Global Configuration I only have FollowSymLinks enabled. I have TraceEnable, ServerSignature, and FileETag OFF; ServerTokens ProductOnly; and the DirectoryIndex priority has index.php as the first entry. I have removed Clamd [Clam Anti-virus], and SpamAssassin is off. Under Tweak Settings, "Default catch-all/default address behavior for new accounts" is set to "fail", and all stats programs are turned off. I have eAccelerator installed, checked it in phpinfo, and it's working.

    [Pre VirtualHost Include under WHM]

        Timeout 20
        KeepAlive On
        MaxKeepAliveRequests 200
        KeepAliveTimeout 3
        MinSpareServers 1
        MaxSpareServers 3
        StartServers 1
        ServerLimit 50
        MaxClients 50
        MaxRequestsPerChild 4000
        ExtendedStatus Off
        #ServerType standalone     (this throws an error)
        HostnameLookups Off
        <Directory "/">
            AllowOverride None
        </Directory>

    My sites (adadaa.com, http://adadaa.net/, kadais.ca) take ages to load, and WHM/cPanel will not even load. My average memory consumption is around 1000 MB (yes, always bursting). The process that consumes the most CPU and also the most memory is mysqld, but I also get around 15 httpd processes when it's bursting. I already got a warning from cpuwatchcheck saying "While processing, the cpu has been maxed out for more than a 6 hour period. The current load/uptime line on the server at the time of this email is 07:00:37 up 11:30, 0 users, load average: 14.64, 16.79, 20.07". I don't know; I have tried switching these config values many different times, but nothing seems to work. Please shed some light... Thanks
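    A rough worst-case sizing check suggests why the box bursts (the per-process figures below are illustrative assumptions, not measurements; the ps one-liner totals the actual httpd resident memory):

        # peak RAM ~= MaxClients * httpd RSS + MySQL buffers + per-connection buffers
        #   50 x ~25 MB per httpd child             ~ 1250 MB
        # + key_buffer 16M + query_cache 16M        ~   32 MB
        # + 50 connections x ~2 MB sort/read/join   ~  100 MB
        # => ~1.4 GB possible peak, far beyond the 512 MB guarantee; lowering
        #    MaxClients/ServerLimit and max_connections is the usual first lever
        ps -ylC httpd | awk '{sum += $8} END {print sum/1024 " MB total httpd RSS"}'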

    Read the article

  • A class meant for an Alfresco behavior and its bean: how do they work, and how are they deployed through Eclipse

    - by MrHappy
    (This is a partial repost of a question asked 10 days ago, because only one part was answered (not included); I've rewritten it into a much better question and added 3 more tags.)

    Where do I put the DeleteAsset.class, or why isn't it being found? I've put the compiled class from the bin folder of the Eclipse workspace into alfresco-4.2.c/tomcat/webapps/alfresco/WEB-INF/classes/com/openerp/behavior/, and right now it's giving me:

        Error loading class [com.openerp.behavior.DeleteAsset] for bean with name
        'deletionBehavior' defined in URL
        [file:/home/openerp/alfresco-4.2.c/tomcat/shared/classes/alfresco/extension/custom-web-context.xml]:
        problem with class file or dependent class; nested exception is
        java.lang.NoClassDefFoundError: com/openerp/behavior/DeleteAsset (wrong name: DeleteAsset)

    when I put it in there. (See the bean below!)

    The code (I'm trying to work without the model class; I don't know if I made any silly mistakes in it):

        package com.openerp.behavior;

        import java.net.*;
        import java.io.*;

        import org.alfresco.repo.node.NodeServicePolicies;
        import org.alfresco.repo.policy.Behaviour;
        import org.alfresco.repo.policy.JavaBehaviour;
        import org.alfresco.repo.policy.PolicyComponent;
        import org.alfresco.repo.policy.Behaviour.NotificationFrequency;
        import org.alfresco.service.cmr.repository.NodeRef;
        import org.alfresco.service.cmr.repository.NodeService;
        import org.alfresco.service.namespace.NamespaceService;
        import org.alfresco.service.namespace.QName;

        // this is the newer version
        //import com.openerp.model.openerpJavaModel;

        public class DeleteAsset implements NodeServicePolicies.BeforeDeleteNodePolicy {

            private PolicyComponent policyComponent;
            private Behaviour beforeDeleteNode;
            private NodeService nodeService;

            public void init() {
                this.beforeDeleteNode =
                        new JavaBehaviour(this, "beforeDeleteNode", NotificationFrequency.EVERY_EVENT);
                // bind the standard beforeDeleteNode policy (Alfresco namespace) to the custom sc:doc type
                this.policyComponent.bindClassBehaviour(
                        QName.createQName(NamespaceService.ALFRESCO_URI, "beforeDeleteNode"),
                        QName.createQName("http://www.someco.com/model/content/1.0", "doc"),
                        this.beforeDeleteNode);
            }

            // setters used by the Spring bean definition below
            public void setNodeService(NodeService nodeService) {
                this.nodeService = nodeService;
            }

            public void setPolicyComponent(PolicyComponent policyComponent) {
                this.policyComponent = policyComponent;
            }

            @Override
            public void beforeDeleteNode(NodeRef node) {
                System.out.println("beforeDeleteNode!");
                try {
                    // this property name could/should be defined in an OpenERP model class
                    QName attachmentID1 =
                            QName.createQName("http://www.someco.com/model/content/1.0", "OpenERPattachmentID1");
                    int attachmentid = (Integer) nodeService.getProperty(node, attachmentID1);
                    URL oracle = new URL("http://0.0.0.0:1885/delete/%20?attachmentid=" + attachmentid);
                    URLConnection yc = oracle.openConnection();
                    BufferedReader in = new BufferedReader(new InputStreamReader(yc.getInputStream()));
                    String inputLine;
                    while ((inputLine = in.readLine()) != null) {
                        // System.out.println(inputLine);
                    }
                    in.close(); // the braces above matter: without them, close() ran inside the loop
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }

    This is my full custom-web-context.xml file:

        <?xml version='1.0' encoding='UTF-8'?>
        <!DOCTYPE beans PUBLIC '-//SPRING//DTD BEAN//EN'
                 'http://www.springframework.org/dtd/spring-beans.dtd'>
        <beans>
            <!-- Registration of new models -->
            <bean id="smartsolution.dictionaryBootstrap" parent="dictionaryModelBootstrap"
                  depends-on="dictionaryBootstrap">
                <property name="models">
                    <list>
                        <value>alfresco/extension/scOpenERPModel.xml</value>
                    </list>
                </property>
            </bean>

            <!-- deletion of attachments within OpenERP when delete is initiated in Alfresco -->
            <!-- property names must match the setters: nodeService / policyComponent -->
            <bean id="DeleteAsset" class="com.openerp.behavior.DeleteAsset" init-method="init">
                <property name="nodeService">
                    <ref bean="NodeService" />
                </property>
                <property name="policyComponent">
                    <ref bean="policyComponent" />
                </property>
            </bean>
        </beans>

    and the content type:

        <type name="sc:doc">
            <title>OpenERP Document</title>
            <parent>cm:content</parent>
        </type>

    There's also this when I open Share:

        An error has occured in the Share component: /share/service/components/dashlets/my-sites.
        It responded with a status of 500 - Internal Error.
        Error Code Information: 500 - An error inside the HTTP server which prevented it
        from fulfilling the request.
        Error Message: 09230001 Failed to execute script
        'classpath*:alfresco/site-webscripts/org/alfresco/components/dashlets/my-sites.get.js':
        09230000 09230001 Failed during processing of IMAP server status configuration from
        Alfresco: 09230000 Unable to retrieve IMAP server status from Alfresco: 404
        Server: Alfresco Spring WebScripts - v1.2.0 (Release 1207) schema 1,000
        Time: Oct 23, 2013 11:40:06 AM

        Exception: org.alfresco.error.AlfrescoRuntimeException - 09230001 Failed during
        processing of IMAP server status configuration from Alfresco: 09230000 Unable to
        retrieve IMAP server status from Alfresco: 404

        org.alfresco.web.scripts.SingletonValueProcessorExtension.getSingletonValue(SingletonValueProcessorExtension.java:108)
        org.alfresco.web.scripts.SingletonValueProcessorExtension.getSingletonValue(SingletonValueProcessorExtension.java:59)
        org.alfresco.web.scripts.ImapServerStatus.getEnabled(ImapServerStatus.java:49)
        sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        java.lang.reflect.Method.invoke(Method.java:606)
        org.mozilla.javascript.MemberBox.invoke(MemberBox.java:155)
        org.mozilla.javascript.JavaMembers.get(JavaMembers.java:117)
        org.mozilla.javascript.NativeJavaObject.get(NativeJavaObject.java:113)
        org.mozilla.javascript.ScriptableObject.getProperty(ScriptableObject.java:1544)
        org.mozilla.javascript.ScriptRuntime.getObjectProp(ScriptRuntime.java:1375)
        org.mozilla.javascript.ScriptRuntime.getObjectProp(ScriptRuntime.java:1364)
        org.mozilla.javascript.gen.c6._c1(file:/opt/alfresco-4.2.c/tomcat/webapps/share/WEB-INF/classes/alfresco/site-webscripts/org/alfresco/components/dashlets/my-sites.get.js:4)
        org.mozilla.javascript.gen.c6.call(file:/opt/alfresco-4.2.c/tomcat/webapps/share/WEB-INF/classes/alfresco/site-webscripts/org/alfresco/components/dashlets/my-sites.get.js)
        org.mozilla.javascript.optimizer.OptRuntime.callName0(OptRuntime.java:108)
        org.mozilla.javascript.gen.c6._c0(file:/opt/alfresco-4.2.c/tomcat/webapps/share/WEB-INF/classes/alfresco/site-webscripts/org/alfresco/components/dashlets/my-sites.get.js:51)
        org.mozilla.javascript.gen.c6.call(file:/opt/alfresco-4.2.c/tomcat/webapps/share/WEB-INF/classes/alfresco/site-webscripts/org/alfresco/components/dashlets/my-sites.get.js)
        org.mozilla.javascript.ContextFactory.doTopCall(ContextFactory.java:393)
        org.mozilla.javascript.ScriptRuntime.doTopCall(ScriptRuntime.java:2834)
        org.mozilla.javascript.gen.c6.call(file:/opt/alfresco-4.2.c/tomcat/webapps/share/WEB-INF/classes/alfresco/site-webscripts/org/alfresco/components/dashlets/my-sites.get.js)
        org.mozilla.javascript.gen.c6.exec(file:/opt/alfresco-4.2.c/tomcat/webapps/share/WEB-INF/classes/alfresco/site-webscripts/org/alfresco/components/dashlets/my-sites.get.js)
        org.springframework.extensions.webscripts.processor.JSScriptProcessor.executeScriptImpl(JSScriptProcessor.java:318)
        org.springframework.extensions.webscripts.processor.JSScriptProcessor.executeScript(JSScriptProcessor.java:192)
        org.springframework.extensions.webscripts.AbstractWebScript.executeScript(AbstractWebScript.java:1305)
        org.springframework.extensions.webscripts.DeclarativeWebScript.execute(DeclarativeWebScript.java:86)
        org.springframework.extensions.webscripts.PresentationContainer.executeScript(PresentationContainer.java:70)
        org.springframework.extensions.webscripts.LocalWebScriptRuntimeContainer.executeScript(LocalWebScriptRuntimeContainer.java:240)
        org.springframework.extensions.webscripts.AbstractRuntime.executeScript(AbstractRuntime.java:377)
        org.springframework.extensions.webscripts.AbstractRuntime.executeScript(AbstractRuntime.java:209)
        org.springframework.extensions.webscripts.WebScriptProcessor.executeBody(WebScriptProcessor.java:310)
        org.springframework.extensions.surf.render.AbstractProcessor.execute(AbstractProcessor.java:57)
        org.springframework.extensions.surf.render.RenderService.process(RenderService.java:599)
        org.springframework.extensions.surf.render.RenderService.renderSubComponent(RenderService.java:505)
        org.springframework.extensions.surf.render.RenderService.renderChromeInclude(RenderService.java:1284)
        org.springframework.extensions.directives.ChromeIncludeFreeMarkerDirective.execute(ChromeIncludeFreeMarkerDirective.java:81)
        freemarker.core.Environment.visit(Environment.java:274)
        freemarker.core.UnifiedCall.accept(UnifiedCall.java:126)
        freemarker.core.Environment.visit(Environment.java:221)
        freemarker.core.MixedContent.accept(MixedContent.java:92)
        freemarker.core.Environment.visit(Environment.java:221)
        freemarker.core.IfBlock.accept(IfBlock.java:82)
        freemarker.core.Environment.visit(Environment.java:221)
        freemarker.core.MixedContent.accept(MixedContent.java:92)
        freemarker.core.Environment.visit(Environment.java:221)
        freemarker.core.Environment.process(Environment.java:199)
        org.springframework.extensions.webscripts.processor.FTLTemplateProcessor.process(FTLTemplateProcessor.java:171)
        org.springframework.extensions.webscripts.WebTemplateProcessor.executeBody(WebTemplateProcessor.java:438)
        org.springframework.extensions.surf.render.AbstractProcessor.execute(AbstractProcessor.java:57)
        org.springframework.extensions.surf.render.RenderService.processRenderable(RenderService.java:204)
        org.springframework.extensions.surf.render.bean.ChromeRenderer.body(ChromeRenderer.java:95)
        org.springframework.extensions.surf.render.AbstractRenderer.render(AbstractRenderer.java:77)
        org.springframework.extensions.surf.render.bean.ChromeRenderer.render(ChromeRenderer.java:86)
        org.springframework.extensions.surf.render.RenderService.processComponent(RenderService.java:432)
        org.springframework.extensions.surf.render.bean.ComponentRenderer.body(ComponentRenderer.java:94)
        org.springframework.extensions.surf.render.AbstractRenderer.render(AbstractRenderer.java:77)
        org.springframework.extensions.surf.render.RenderService.renderComponent(RenderService.java:961)
        org.springframework.extensions.surf.render.RenderService.renderRegionComponents(RenderService.java:900)
        org.springframework.extensions.surf.render.RenderService.renderChromeInclude(RenderService.java:1263)
        org.springframework.extensions.directives.ChromeIncludeFreeMarkerDirective.execute(ChromeIncludeFreeMarkerDirective.java:81)
        freemarker.core.Environment.visit(Environment.java:274)
        freemarker.core.UnifiedCall.accept(UnifiedCall.java:126)
        freemarker.core.Environment.visit(Environment.java:221)
        freemarker.core.MixedContent.accept(MixedContent.java:92)
        freemarker.core.Environment.visit(Environment.java:221)
        freemarker.core.Environment.process(Environment.java:199)
        org.springframework.extensions.webscripts.processor.FTLTemplateProcessor.process(FTLTemplateProcessor.java:171)
        org.springframework.extensions.webscripts.WebTemplateProcessor.executeBody(WebTemplateProcessor.java:438)
        org.springframework.extensions.surf.render.AbstractProcessor.execute(AbstractProcessor.java:57)
        org.springframework.extensions.surf.render.RenderService.processRenderable(RenderService.java:204)
        org.springframework.extensions.surf.render.bean.ChromeRenderer.body(ChromeRenderer.java:95)
        org.springframework.extensions.surf.render.AbstractRenderer.render(AbstractRenderer.java:77)
        org.springframework.extensions.surf.render.bean.ChromeRenderer.render(ChromeRenderer.java:86)
        org.springframework.extensions.surf.render.bean.RegionRenderer.body(RegionRenderer.java:99)
        org.springframework.extensions.surf.render.AbstractRenderer.render(AbstractRenderer.java:77)
        org.springframework.extensions.surf.render.RenderService.renderRegion(RenderService.java:851)
        org.springframework.extensions.directives.RegionDirectiveData.render(RegionDirectiveData.java:91)
        org.springframework.extensions.surf.extensibility.impl.ExtensibilityModelImpl.merge(ExtensibilityModelImpl.java:408)
        org.springframework.extensions.surf.extensibility.impl.AbstractExtensibilityDirective.merge(AbstractExtensibilityDirective.java:169)
        org.springframework.extensions.surf.extensibility.impl.AbstractExtensibilityDirective.execute(AbstractExtensibilityDirective.java:137)
        freemarker.core.Environment.visit(Environment.java:274)
        freemarker.core.UnifiedCall.accept(UnifiedCall.java:126)
        freemarker.core.Environment.visit(Environment.java:221)
        freemarker.core.IteratorBlock$Context.runLoop(IteratorBlock.java:179)
        freemarker.core.Environment.visit(Environment.java:428)
        freemarker.core.IteratorBlock.accept(IteratorBlock.java:102)
        freemarker.core.Environment.visit(Environment.java:221)
        freemarker.core.MixedContent.accept(MixedContent.java:92)
        freemarker.core.Environment.visit(Environment.java:221)
        freemarker.core.IteratorBlock$Context.runLoop(IteratorBlock.java:179)
        freemarker.core.Environment.visit(Environment.java:428)
        freemarker.core.IteratorBlock.accept(IteratorBlock.java:102)
        freemarker.core.Environment.visit(Environment.java:221)
        freemarker.core.MixedContent.accept(MixedContent.java:92)
        freemarker.core.Environment.visit(Environment.java:221)
        freemarker.core.Macro$Context.runMacro(Macro.java:172)
        freemarker.core.Environment.visit(Environment.java:614)
        freemarker.core.UnifiedCall.accept(UnifiedCall.java:106)
        freemarker.core.Environment.visit(Environment.java:221)
        freemarker.core.IfBlock.accept(IfBlock.java:82)
        freemarker.core.Environment.visit(Environment.java:221)
        freemarker.core.Macro$Context.runMacro(Macro.java:172)
        freemarker.core.Environment.visit(Environment.java:614)
        freemarker.core.UnifiedCall.accept(UnifiedCall.java:106)
        freemarker.core.Environment.visit(Environment.java:221)
        freemarker.core.MixedContent.accept(MixedContent.java:92)
        freemarker.core.Environment.visit(Environment.java:221)
        freemarker.core.Environment$3.render(Environment.java:246)
        org.springframework.extensions.surf.extensibility.impl.DefaultExtensibilityDirectiveData.render(DefaultExtensibilityDirectiveData.java:119)
        org.springframework.extensions.surf.extensibility.impl.ExtensibilityModelImpl.merge(ExtensibilityModelImpl.java:408)
        org.springframework.extensions.surf.extensibility.impl.AbstractExtensibilityDirective.merge(AbstractExtensibilityDirective.java:169)
        org.springframework.extensions.surf.extensibility.impl.AbstractExtensibilityDirective.execute(AbstractExtensibilityDirective.java:137)
        freemarker.core.Environment.visit(Environment.java:274)
        freemarker.core.UnifiedCall.accept(UnifiedCall.java:126)
        freemarker.core.Environment.visit(Environment.java:221)
        freemarker.core.MixedContent.accept(MixedContent.java:92)
        freemarker.core.Environment.visit(Environment.java:221)
        freemarker.core.Environment.visit(Environment.java:406)
        freemarker.core.BodyInstruction.accept(BodyInstruction.java:93)
        freemarker.core.Environment.visit(Environment.java:221)
        freemarker.core.MixedContent.accept(MixedContent.java:92)
        freemarker.core.Environment.visit(Environment.java:221)
        freemarker.core.Macro$Context.runMacro(Macro.java:172)
        freemarker.core.Environment.visit(Environment.java:614)
        freemarker.core.UnifiedCall.accept(UnifiedCall.java:106)
        freemarker.core.Environment.visit(Environment.java:221)
        freemarker.core.MixedContent.accept(MixedContent.java:92)
        freemarker.core.Environment.visit(Environment.java:221)
        freemarker.core.Environment.process(Environment.java:199)
        org.springframework.extensions.webscripts.processor.FTLTemplateProcessor.process(FTLTemplateProcessor.java:171)
        org.springframework.extensions.webscripts.WebTemplateProcessor.executeBody(WebTemplateProcessor.java:438)
        org.springframework.extensions.surf.render.AbstractProcessor.execute(AbstractProcessor.java:57)
        org.springframework.extensions.surf.render.RenderService.processTemplate(RenderService.java:721)
        org.springframework.extensions.surf.render.bean.TemplateInstanceRenderer.body(TemplateInstanceRenderer.java:140)
        org.springframework.extensions.surf.render.AbstractRenderer.render(AbstractRenderer.java:77)
        org.springframework.extensions.surf.render.bean.PageRenderer.body(PageRenderer.java:85)
        org.springframework.extensions.surf.render.AbstractRenderer.render(AbstractRenderer.java:77)
        org.springframework.extensions.surf.render.RenderService.renderPage(RenderService.java:762)
        org.springframework.extensions.surf.mvc.PageView.dispatchPage(PageView.java:411)
        org.springframework.extensions.surf.mvc.PageView.renderView(PageView.java:306)
        org.springframework.extensions.surf.mvc.AbstractWebFrameworkView.renderMergedOutputModel(AbstractWebFrameworkView.java:316)
        org.springframework.web.servlet.view.AbstractView.render(AbstractView.java:250)
        org.springframework.web.servlet.DispatcherServlet.render(DispatcherServlet.java:1047)
        org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:817)
        org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:719)
        org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:644)
        org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:549)
        javax.servlet.http.HttpServlet.service(HttpServlet.java:621)
        javax.servlet.http.HttpServlet.service(HttpServlet.java:722)
        org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:305)
        org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
        org.alfresco.web.site.servlet.MTAuthenticationFilter.doFilter(MTAuthenticationFilter.java:74)
        org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243)
        org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
        org.alfresco.web.site.servlet.SSOAuthenticationFilter.doFilter(SSOAuthenticationFilter.java:374)
        org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243)
        org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
        org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:222)
        org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:123)
        org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:472)
        org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:168)
        org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:99)
        org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:929)
        org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118)
        org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:407)
        org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1002)
        org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:585)
        org.apache.tomcat.util.net.AprEndpoint$SocketWithOptionsProcessor.run(AprEndpoint.java:1771)
        java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        java.lang.Thread.run(Thread.java:724)

        Exception: org.springframework.extensions.webscripts.WebScriptException - 09230000
        09230001 Failed during processing of IMAP server status configuration from Alfresco:
        09230000 Unable to retrieve IMAP server status from Alfresco: 404
        org.springframework.extensions.webscripts.processor.JSScriptProcessor.executeScriptImpl(JSScriptProcessor.java:324)

        Exception: org.springframework.extensions.webscripts.WebScriptException - 09230001
        Failed to execute script
        'classpath*:alfresco/site-webscripts/org/alfresco/components/dashlets/my-sites.get.js':
        09230000 09230001 Failed during processing of IMAP server status configuration from
        Alfresco: 09230000 Unable to retrieve IMAP server status from Alfresco: 404
        org.springframework.extensions.webscripts.processor.JSScriptProcessor.executeScript(JSScriptProcessor.java:200)

    UPDATE: I think I've found the problem. Being a newbie to Eclipse, I haven't managed the dependencies well, I think. Could anyone link me to a tutorial describing how to get org.alfresco.repo.node.NodeServicePolicies (as seen in import org.alfresco.repo.node.NodeServicePolicies;) and other such imports into Eclipse? I've got the Alfresco source from SVN, but the tutorial I've found seems to fail me.
    The error text embedded in the rejected class file reads (with the .class binary noise stripped out):

        java.lang.Error: Unresolved compilation problems:
            The declared package "com.openerp.behavior" does not match the expected
                package "java.com.openerp.behavior"
            The import org.alfresco cannot be resolved (repeated 13 times)
            The import org.apache cannot be resolved
            The import com.openerp cannot be resolved
            NodeServicePolicies cannot be resolved to a type
            PolicyComponent cannot be resolved to a type
            Behaviour cannot be resolved to a type
            NodeService cannot be resolved to a type
            Behaviour cannot be resolved to a type
            JavaBehaviour cannot be resolved to a type
            NotificationFrequency cannot be resolved to a variable
            PolicyComponent cannot be resolved to a type
            QName cannot be resolved
            QName cannot be resolved
            Behaviour cannot be resolved to a type
            Return type for the method is missing
            NodeService cannot be resolved to a type
            NodeService cannot be resolved to a type
            NodeRef cannot be resolved to a type
            QName cannot be resolved to a type
            QName cannot be resolved
            NodeService cannot be resolved to a type

    (The init and beforeDeleteNode methods carry the same unresolved-type errors, and the embedded field descriptor "Ljava/com/openerp/behavior/DeleteAsset;" shows the class was actually compiled under the package java.com.openerp.behavior.)
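    That "wrong name" / package-mismatch pair suggests the Eclipse source folder included an extra java/ segment, so the class was compiled as java.com.openerp.behavior.DeleteAsset but copied to com/openerp/behavior/. One hedged way to verify from the shell before redeploying:

        cd ~/alfresco-4.2.c/tomcat/webapps/alfresco/WEB-INF/classes
        javap com.openerp.behavior.DeleteAsset   # should print the class members;
                                                 # a "wrong name" error here confirms
                                                 # the package mismatch

    Moving DeleteAsset.java so its path is src/com/openerp/behavior/ (not src/java/com/...) and adding the JARs from tomcat/webapps/alfresco/WEB-INF/lib to the Eclipse build path should clear both the unresolved imports and the NoClassDefFoundError.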

    Read the article

  • How is the Linux repository administrated?

    - by David
    I am amazed by the Linux project and I would like to learn how they administrate the code, given the huge number of developers. I found the Linux repository on GitHub, but I do not understand how it is administrated. For example, the following commit: https://github.com/torvalds/linux/commit/31fd84b95eb211d5db460a1dda85e004800a7b52 . Notice that one person authored the change and Torvalds committed it. How is this possible? I thought that it was only possible to have either pull or push rights, but here it seems like there is an approval stage. I should mention that the specific problem I am trying to solve is that we use pull requests to our repo. The problem we are facing is that while a pull request is waiting to get merged, it is often broken by a commit. This leads to seemingly never-ending work to adapt the fork in order to make the pull request merge smoothly. Does Linux solve this by giving lots of people push rights? (At least there are currently just three pull requests, but hundreds of commits per day.)
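    For reference, a small sketch of the author/committer split visible in that commit; it is a property of every Git commit object, not a GitHub approval feature. Kernel maintainers typically apply patches received by mail, which preserves authorship (the patch file name below is hypothetical):

        # a maintainer applies a contributor's emailed patch
        git am 0001-some-fix.patch      # author = the patch writer, committer = the maintainer
        # the same split appears with cherry-pick and with merges of pulled branches
        git log -1 --pretty='author:    %an <%ae>%ncommitter: %cn <%ce>'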

    Read the article

  • Interview with Lenz Grimmer about MySQL Connect

    - by Keith Larson
    Keith Larson: Thank you for allowing me to do this interview with you. I have been talking with a few different Oracle ACEs about the MySQL Connect conference. I figured the MySQL community might be missing you as well. You have been very busy with Oracle Linux, but I know you still have an eye on the MySQL community. How have things been?

    Lenz Grimmer: Thanks for including me in this series of interviews, I feel honored! I've read the other interviews, and really liked them. I still try to follow what's going on over in the MySQL community and it's good to see that many of the familiar faces are still around. Over the course of the 9 years that I was involved with MySQL, many colleagues and contacts turned into good friends and we still maintain close relationships. It's been almost 1.5 years since I moved into my new role here in the Linux team at Oracle, and I really enjoy working on a Linux distribution again (I worked for SUSE before I joined MySQL AB in 2002). I'm still learning a lot - Linux in the data center has greatly evolved in so many ways and there are a lot of new and exciting technologies to explore.

    Keith Larson: What were your thoughts when you heard that Oracle was going to deliver the MySQL Connect conference to the MySQL community?

    Lenz Grimmer: I think it's testament to the fact that Oracle deeply cares about MySQL, despite what many skeptics may say. What started as "MySQL Sunday" two years ago has now evolved into a full-blown sub-conference, with 80 sessions at one of the largest corporate IT events in the world. I find this quite telling; not many products at Oracle enjoy this level of exposure! So it certainly makes me feel proud to see how far MySQL has come.

    Keith Larson: Have you had a chance to look over the sessions? What are your thoughts on them?

    Lenz Grimmer: I did indeed look at the final schedule. The content committee did a great job with selecting these sessions. I'm glad to see that the content selection was influenced by involving well-known and respected members of the MySQL community. The sessions cover a broad range of topics and technologies, covering both established topics and recent developments.

    Keith Larson: When you get a chance, what sessions do you plan on attending?

    Lenz Grimmer: I will actually be manning the Oracle booth in the exhibition area on one of these days, so I'm not sure if I'll have a lot of time to attend sessions. But if I do, I'd love to see the keynotes and catch some of the sessions that talk about recent developments and new features in MySQL, high availability and clustering. Quite a lot has happened and it's hard to keep up with this constant flow of new MySQL releases. In particular, the following sessions caught my attention:

    - MySQL Connect Keynote: The State of the Dolphin
    - Evaluating MySQL High-Availability Alternatives
    - CERN's MySQL "as a Service" Deployment with Oracle VM: Empowering Users
    - MySQL 5.6 Replication: Taking Scalability and High Availability to the Next Level
    - What's New in MySQL Server 5.6?
    - MySQL Security: Past and Present
    - MySQL at Twitter: Development and Deployment
    - MySQL Community BOF
    - MySQL Connect Keynote: MySQL Perspectives

    Keith Larson: So I will ask you, just like I have asked the others I have interviewed: any tips that you would give to people for handling the long hours at conferences?

    Lenz Grimmer: Wear comfortable shoes and make sure to drink a lot! Also prepare a plan of the sessions you would like to attend beforehand, and familiarize yourself with the venue, so you can get to the next talk in time without scrambling to find the location. The good thing about piggybacking on such a large conference as Oracle OpenWorld is that you benefit from the whole infrastructure. For example, there is a nice schedule builder that helps you keep track of your sessions of interest. Other than that, bring enough business cards and talk to people; build up your network among your peers and other MySQL professionals!

    Keith Larson: What features of the MySQL 5.6 release do you look forward to the most?

    Lenz Grimmer: There has been solid progress in so many areas like the InnoDB storage engine, the optimizer, replication, or Performance Schema, that it's hard for me to really highlight anything in particular. All in all, MySQL 5.6 sounds like a very promising release. I'm confident it will follow the tradition that Oracle already established with MySQL 5.5, which received a lot of praise even from very critical members of the MySQL community. If I had to name a single feature, I'm particularly and personally happy that the precise GIS functions have finally made it into a GA release - that was long overdue.

    Keith Larson: In your opinion, what is the best reason for someone to attend this event?

    Lenz Grimmer: This conference is an excellent opportunity to get in touch with the key people in the MySQL community and ecosystem and to get facts and information from the domain experts and developers that work on MySQL. The broad range of topics should attract people from a variety of roles and relations to MySQL, beginning with developers and DBAs, up to CIOs considering MySQL as a viable solution for their requirements.

    Keith Larson: You will be attending MySQL Connect and have some Oracle Linux demos. Do you see a growing demand for MySQL on Oracle Linux?

    Lenz Grimmer: Yes! Oracle Linux is our recommended Linux distribution and we have a good relationship with the MySQL engineering group. They use Oracle Linux as a base Linux platform for development and QA, so we make sure that MySQL and Oracle Linux are well tested together. Setting up a MySQL server on Oracle Linux can be done very quickly, and many customers recognize the benefits of using them both in combination. Because Oracle Linux is available for free (including free bug fixes and errata), it's an ideal choice for running MySQL in your data center. You can run the same Linux distribution on both your development/staging systems and on the production machines, and you decide which of these should be covered by a support subscription and at which level of support. This gives you flexibility and provides some really attractive cost-saving opportunities.

    Keith Larson: Since I am a Linux user and fan, what is on the horizon for Oracle Linux?

    Lenz Grimmer: We're working hard on broadening the ecosystem around Oracle Linux, building up partnerships with ISVs and IHVs to certify Oracle Linux as a fully supported platform for their products. We also continue to collaborate closely with the Linux kernel community on various projects, to make sure that Linux scales and performs well on large systems and meets the demands of today's data centers. These improvements and enhancements will then be rolled into the Unbreakable Enterprise Kernel, which is the key ingredient that sets Oracle Linux apart from other distributions. We also have a number of ongoing projects which are making good progress, and I'm sure you'll hear more about this at the upcoming OpenWorld conference :)

    Keith Larson: What is something that more people should be aware of when it comes to Oracle Linux and MySQL?

    Lenz Grimmer: Many people assume that Oracle Linux is just tuned for Oracle products, such as the Oracle Database or our Engineered Systems. While it's of course true that we do a lot of testing and optimization for these workloads, Oracle Linux is and will remain a general-purpose Linux distribution that is a very good foundation for setting up a LAMP stack, for example. We also provide MySQL RPM packages for Oracle Linux, so you can easily stay up to date if you need something newer than what's included in the stock distribution. One more thing that is really unique to Oracle Linux is Ksplice, which allows you to apply security patches to the running Linux kernel without having to reboot. This ensures that your MySQL database server keeps up and running and is not affected by any downtime.

    Keith Larson: What else would you like to add?

    Lenz Grimmer: Thanks again for getting in touch with me, I appreciated the opportunity. I'm looking forward to MySQL Connect and Oracle OpenWorld and to meeting you and many other people from the MySQL community that I haven't seen for quite some time!

    Keith Larson: Thank you, Lenz!

    Read the article

  • Tracking pages with variables in GA

    - by Imran
    Recently I updated my site; it now passes a variable on some links, like so: www.mysite.com/1234/?play=true. I've noticed that Google Analytics records www.mysite.com/1234/ and www.mysite.com/1234/?play=true as two different URLs. Is there a way to merge them, because they are after all just one page? It makes "Top Content", for example, hard to read because of duplicates. I've read about something called a canonical link tag which may help this; my blog already has this inserted into the head, but it doesn't make a difference. Any suggestions?
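    For reference, the canonical tag looks like this; it influences search-engine indexing rather than Analytics' content reports, which is likely why it made no difference. Google Analytics has a separate per-profile "Exclude URL Query Parameters" setting where a parameter name such as play can be listed so both URLs are counted as one page:

        <link rel="canonical" href="http://www.mysite.com/1234/" />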

    Read the article

  • Google I/O 2011: Large-scale Data Analysis Using the App Engine Pipeline API

    Google I/O 2011: Large-scale Data Analysis Using the App Engine Pipeline API - Brett Slatkin. The Pipeline API makes it easy to analyze complex data using App Engine. This talk will cover how to build multi-phase Map Reduce workflows; how to merge multiple large data sources with "join" operations; and how to build reusable analysis components. It will also cover the API's concurrency model, how to debug in production, and built-in testing facilities. From: GoogleDevelopers | Time: 51:39

    Read the article

  • IBM DB2 and the “'DbProviderFactories' section can only appear once per config” error

    - by Davide Mauri
    IBM doesn't like MS. That's a fact. And that's why you can get your machine.config file (!!!) corrupted if you try to install the IBM DB2 data providers on your server machine. If at some point, after having installed the IBM DB2 data providers, your SSIS packages or SSAS cubes or SSRS reports start to complain that the 'DbProviderFactories' section can only appear once per config, you may want to check your machine.config, located in the %runtime install path%\Config (see http://msdn.microsoft.com/en-us/library/ms229697%28v=vs.71%29.aspx). Almost surely you'll find the IBM DB2 provider in an additional DbProviderFactories section, all alone. Poor guy. Remove the duplicate DbProviderFactories entry, merging everything into a single DbProviderFactories section, and after that everything will start to work again.
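    Schematically, the broken and fixed layouts look like this (provider attributes are shortened with "..." for readability; only the element nesting matters here):

        <!-- broken: two DbProviderFactories elements under system.data -->
        <system.data>
          <DbProviderFactories>
            <add name="SqlClient Data Provider" invariant="System.Data.SqlClient" ... />
          </DbProviderFactories>
          <DbProviderFactories>
            <add name="IBM DB2 .NET Data Provider" invariant="IBM.Data.DB2" ... />
          </DbProviderFactories>
        </system.data>

        <!-- fixed: every provider merged into the one and only section -->
        <system.data>
          <DbProviderFactories>
            <add name="SqlClient Data Provider" invariant="System.Data.SqlClient" ... />
            <add name="IBM DB2 .NET Data Provider" invariant="IBM.Data.DB2" ... />
          </DbProviderFactories>
        </system.data>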

    Read the article

  • Week in Geek: 50 Million Viruses and More on the Way Edition

    - by Asian Angel
    This week we learned how to backup and copy data between iOS devices, use Linux commands in Windows with Cygwin, boost email writing productivity with Microsoft Word Mail Merge, be more productive in Ubuntu using keyboard shortcuts, "restore the FTP service in XBMC, rename downloaded TV shows, access the Android Market in emulation", and more.

    Read the article

  • SEO consequences for merging country sites in a .com

    - by Pekka
    I am in the process of refactoring a number of rental portals I've built for a company with locations in Austria, Germany, Switzerland, and the Netherlands. Instead of the current setup of each country site running under its own domain name:

        www.companyname.de
        www.companyname.ch
        www.companyname.at

    I would love to merge them all in this way:

        www.companyname.com/de
        www.companyname.com/ch
        www.companyname.com/at

    with the country TLDs doing a 301 redirect to the respective .com address. However, I have been repeatedly told not to do this due to likely problems with SEO; the business is very SEO dependent and, being a rental chain, needs to be strong in local results. So the question is: is there an unavoidable hit in search engine optimization when redirecting to a central .com domain? What measures can be taken to soften the blow? What comes to my mind is explicitly specifying a lang attribute in the html tag. Are there any other ways to specifically point out geographical location for sub-directories?
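    Two commonly cited measures, sketched here for reference (the hreflang values assume German-language content per country; adjust to the actual languages): rel-alternate-hreflang annotations in each page's head, plus per-subdirectory geotargeting in Google Webmaster Tools after registering each subdirectory as its own site:

        <link rel="alternate" hreflang="de-DE" href="http://www.companyname.com/de/" />
        <link rel="alternate" hreflang="de-CH" href="http://www.companyname.com/ch/" />
        <link rel="alternate" hreflang="de-AT" href="http://www.companyname.com/at/" />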

    Read the article
