Search Results

Search found 45843 results on 1834 pages for 'network access'.


  • Deploying in Windows 2008 [closed]

    - by Blisk
    Possible Duplicate: How is software deployed via Active Directory? I have been trying to deploy programs, mapped network drives, and a scheduled task job. I didn't manage to run a logon script on the PCs until I put it in the Default Domain Policy, but once I did that, the programs were installed on my server too, along with the mapped network drives and so on. So I don't know how to deploy all of this through GPO without installing anything on the server. I also still haven't managed to deploy a scheduled task to the clients; I managed to create it on the server but not on the clients. I need that scheduled task because I have to update software on the clients every week. I can't manually update Flash Player, Adobe Reader, Firefox, etc. on the clients, and the people using the client PCs wouldn't know how to do it even if they had the ability. The clients don't have admin rights.

    Read the article

  • Android device unable to obtain an IP address when connecting to an AP created with hostapd

    - by user114392
    I have a wired internet connection that works behind an authenticated proxy server. I followed the steps mentioned here and managed to create a hotspot which my Google Nexus 7 detects. However, the tablet seems stuck at "obtaining an IP address" and cannot connect to the internet. I initially received the following error message when running the script in the terminal: "dnsmasq: failed to create listening socket for 127.0.0.1: Address already in use [fail]". I figured this was a conflict with NetworkManager, so I commented out the "dns=dnsmasq" line in the NetworkManager configuration file. After a network-manager restart, the first error no longer appears, but I get the following instead: "Configuration file: /etc/hostapd.conf / Failed to create interface mon.wlan0: -23 (Too many open files in system) / Try to remove and re-create mon.wlan0". In both cases, however, the hotspot is created and detected by my Android device; it just cannot "obtain an IP address" and connect. Is it because my eth0 connects via a proxy server? Or could there be something wrong with the dnsmasq config? Any help would be appreciated.
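
    Since the device hangs at "obtaining an IP address", the most likely gap is that no DHCP server answers on wlan0 once dnsmasq loses the fight over 127.0.0.1. A minimal sketch of running a dedicated dnsmasq bound only to the hotspot interface (the addresses below are assumptions, not from the post):

      # serve DHCP on wlan0 only, so it no longer collides with
      # NetworkManager's dnsmasq instance listening on 127.0.0.1
      sudo ifconfig wlan0 192.168.150.1 netmask 255.255.255.0
      sudo dnsmasq --interface=wlan0 --bind-interfaces \
           --dhcp-range=192.168.150.2,192.168.150.20,12h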

    Read the article

  • How does bridged networking work?

    - by agz
    How does bridged networking work? I have looked through the VirtualBox manuals, but nothing extremely technical came up (just a generic gloss-over of the topic). How does it assign a different IP to the virtual machine while using the same network card? Why does this different IP (found using ip addr under Linux) not show up under the "attached devices" section of my router, even though I can port forward to it? How come connecting to a password-protected WiFi network does not require me to enter my password? Is this multihoming?
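
    For context, the bridged attachment itself is just a VM setting; a sketch of setting and inspecting it from the command line (the VM name and host adapter are assumptions):

      # attach the VM's first virtual NIC to the host's wlan0 in bridged mode
      VBoxManage modifyvm "ubuntu-vm" --nic1 bridged --bridgeadapter1 wlan0
      # confirm the attachment and the VM's own MAC address
      VBoxManage showvminfo "ubuntu-vm" | grep NIC

    The separate MAC address is the key detail: the bridge sorts traffic on the one physical card by MAC, so DHCP hands the VM its own lease alongside the host's.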

    Read the article

  • All network interfaces hang for seconds while one interface goes up/down

    - by user3698377
    I am building a client/server application that uses several network interfaces in parallel for redundancy, and I have noticed that while one network interface goes down or comes up, communication on the other interfaces hangs for several seconds. I can reproduce this behavior without my application in a simple way: two interfaces are available on computer 1 (Ethernet and WiFi); from computer 2, ping the IP address of computer 1's Ethernet connection; disconnect the WiFi on computer 1; the ping hangs for seconds, and then packets travel between the two computers again. The hang also happens when I turn the WiFi connection on computer 1 back on, and likewise if I ping the WiFi IP and turn the Ethernet connection off/on (or unplug/plug the cable). I am using Ubuntu Linux 12.04 on both computers. Any ideas why this is happening, and whether/how it can be avoided?

    Read the article

  • Corrupted .WAR file after transfer between 32-bit and 64-bit Windows Server and desktop machines

    - by Albert Widjaja
    Hi All, Has anyone experienced this problem of a corrupted .WAR file after it has been copied over a network share? This is a .WAR (Web Archive) file, the J2EE application file (a .WAR file is compressed with the same ZIP algorithm, I think?). Scenario 1: Windows Server 2008 x64 transfer to Windows XP using the RDP client (Local Devices and Resources). Scenario 2: Windows XP 32-bit transfer to Windows Server 2003 x64 using a shared network drive (port 445, SMB?). In both scenarios the transfer always ends up corrupted (the source code appears duplicated at the end of lines when you open the file in the Eclipse/Java IDE), but when I first compress the file into a .ZIP, everything is OK in both scenarios. Can anyone explain why this happens? Thanks, Albert

    Read the article

  • Multiple vulnerabilities in Mozilla Firefox

    - by chandan
    CVE             Description                                                                                           CVSSv2 Base Score
    CVE-2011-2372   Permissions, Privileges, and Access Controls vulnerability                                           3.5
    CVE-2011-2995   Denial of Service (DoS) vulnerability                                                                10.0
    CVE-2011-2997   Denial of Service (DoS) vulnerability                                                                10.0
    CVE-2011-3000   Improper Control of Generation of Code ('Code Injection') vulnerability                              4.3
    CVE-2011-3001   Permissions, Privileges, and Access Controls vulnerability                                           4.3
    CVE-2011-3002   Denial of Service (DoS) vulnerability                                                                9.3
    CVE-2011-3003   Denial of Service (DoS) vulnerability                                                                10.0
    CVE-2011-3004   Improper Input Validation vulnerability                                                              4.3
    CVE-2011-3005   Denial of Service (DoS) vulnerability                                                                9.3
    CVE-2011-3232   Improper Control of Generation of Code ('Code Injection') vulnerability                              9.3
    CVE-2011-3648   Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting') vulnerability   4.3
    CVE-2011-3650   Improper Restriction of Operations within the Bounds of a Memory Buffer vulnerability                9.3
    CVE-2011-3651   Denial of Service (DoS) vulnerability                                                                10.0
    CVE-2011-3652   Denial of Service (DoS) vulnerability                                                                10.0
    CVE-2011-3654   Denial of Service (DoS) vulnerability                                                                10.0
    CVE-2011-3655   Improper Control of Generation of Code ('Code Injection') vulnerability                              9.3

    Component: Firefox web browser
    Product and Resolution: Solaris 11 (11/11 SRU 3); Solaris 10 (Contact Support)

    This notification describes vulnerabilities fixed in third-party components that are included in Sun's product distribution. Information about vulnerabilities affecting Oracle Sun products can be found on the Oracle Critical Patch Updates and Security Alerts page.

    Read the article

  • Java ME SDK 3.0.5 is released!

    - by SungmoonCho
    Java ME SDK 3.0.5 went live! For many months we have been working hard to fix bugs from the previous version and to add many new features requested by the Java ME community. You can download the new version from this link. Please see below for more information.
    - NetBeans Integration: All Java ME tools are implemented as NetBeans plugins.
    - Device Manager: Java ME SDK now supports multiple device managers; you can switch between different versions of device managers.
    - LWUIT 1.5 Support: The Resource Editor is available from the Java ME menu to help you design and organize resources for LWUIT applications. For a description of LWUIT 1.5 features, visit the LWUIT download page.
    - Network Monitor: Integrated with the NetBeans profiling tools, the Network Monitor now supports WMA, SIP, Bluetooth and OBEX, SATSA APDU and JCRMI, and server sockets.
    - CPU Profiler: Now uses standard NetBeans profiling facilities to view snapshots. Profiling of VM classes can also be toggled on or off.
    - WURFL Device Database: The database has been updated with more than 1000 new devices.
    - Tracing: New tracing functionality now includes CLDC VM events and monitors events such as exceptions, class loading, garbage collection, and method invocation.
    - New or updated JSR support: Includes support for JSR 234 (Advanced Multimedia Supplements), JSR 253 (Mobile Telephony API), JSR 257 (Contactless Communication API), JSR 258 (Mobile User Interface Customization API), and JSR 293 (XML API for Java ME).

    Read the article

  • WebCenter Content shared folders for clustering

    - by Kyle Hatlestad
    When configuring a WebCenter Content (WCC) cluster, one of the things that makes it unique among WebLogic Server applications is its requirement for a shared file system. This is actually no different than 10g and previous versions of UCM, when it ran directly on a JVM. And while it is simple enough to say it needs a shared file system, there are some crucial details in how those directories are configured; if they aren't followed, you may end up with some unwanted behavior. This blog post will go into the details of exactly how the file systems should be split and what options are required.

    Beyond documents being stored on the file system and/or database, and metadata being stored in the database along with other structured data, there is other information being read and written on the file system. Information such as user profile preferences, workflow item state, metadata profiles, and other details are stored in files. In addition, for certain processes within WCC, each of the nodes needs to know what the other nodes are doing so they don't step on each other. WCC keeps track of this through lock files on the file system. Because of this, each WCC node must have access to the same file system, just as it has access to the same database.

    Since WCC uses its own file-based locking mechanism, it also needs to access those files without file attribute caching and without locking being done by the client (node). If one of the nodes accesses a certain status file and it happens to be cached, that node might attempt to run a process another node is already working on. And if a particular file is locked by one of the node clients, this could interfere with access by another node. Unfortunately, disabling file attribute caching on the file share can impact performance, so it is important to disable caching and locking only on the particular folders that require it.

    When configuring WebCenter Content after deploying the domain, it asks for 3 different directories: Content Server Instance Folder, Native File Repository Location, and Weblayout Folder. Starting in PS5, it also asks for the User Profile Folder. Even if you plan on storing the content in the database, you still need to establish the Native File (Vault) and Weblayout directories; these will be used for handling temporary files, cached files, and files used to deliver the UI. Of these directories, the only folder which needs file attribute caching and locking disabled is the 'Content Server Instance Folder'. So when establishing this share through NFS or a clustered file system, be sure to specify those options; for instance, if creating the share through NFS, use the 'noac' and 'nolock' mount options. For the other directories, caching and locking should be left enabled to provide the best performance at those locations.
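
    For instance, a sketch of the two NFS mounts under the split described above (the filer host and export paths are placeholders):

      # instance share: attribute caching and client-side locking disabled
      mount -t nfs -o rw,hard,noac,nolock filer01:/export/ucm_instance /mnt/share_no_cache
      # vault/weblayout share: caching and locking left on for performance
      mount -t nfs -o rw,hard filer01:/export/ucm_files /mnt/share_with_cache
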
    These directory path configurations are contained within the <domain dir>\ucm\cs\bin\intradoc.cfg file:

      #Server System Properties
      IDC_Id=UCM_server1

      #Server Directory Variables
      IdcHomeDir=/u01/fmw/Oracle_ECM1/ucm/idc/
      FmwDomainConfigDir=/u01/fmw/user_projects/domains/base_domain/config/fmwconfig/
      AppServerJavaHome=/u01/jdk/jdk1.6.0_22/jre/
      AppServerJavaUse64Bit=true
      IntradocDir=/mnt/share_no_cache/base_domain/ucm/cs/
      VaultDir=/mnt/share_with_cache/ucm/cs/vault/
      WeblayoutDir=/mnt/share_with_cache/ucm/cs/weblayout/

      #Server Classpath variables

      #Additional Variables
      #NOTE: UserProfilesDir is only available in PS5 (11.1.1.6.0)
      UserProfilesDir=/mnt/share_with_cache/ucm/cs/data/users/profiles/

    In addition to these folder configurations, it's also recommended to move node-specific folders to local disk to avoid unnecessary traffic to the shared directory. So on each node, go to <domain dir>\ucm\cs\bin\intradoc.cfg and add these additional configuration entries:

      VaultTempDir=<domain dir>/ucm/<cs>/vault/~temp/
      TraceDirectory=<domain dir>/servers/<UCM_serverN>/logs/
      EventDirectory=<domain dir>/servers/<UCM_serverN>/logs/event/

    And of course, don't forget the cluster-specific configuration values to add as well. These can be added through Admin Server -> General Configuration -> Additional Configuration Variables or directly in the <IntradocDir>/config/config.cfg file:

      ArchiverDoLocks=true
      DisableSharedCacheChecking=true
      ServiceAllowRetry=true     (use only with Oracle RAC Database)
      PublishLockTimeout=300000  (time can vary depending on publishing time and number of nodes)

    For additional information and details on clustering configuration, I highly recommend reviewing document [1209496.1] on the support site. In addition, there is a great step-by-step guide on setting up a WebCenter Content cluster [1359930.1].

    Read the article

  • DNS server at router level vs. computer level

    - by Craig
    I have three questions. Is it better/faster/optimal to set up your preferred and alternate DNS servers in your OS's network settings or in your router settings? Will it cause problems if they are set up in both places, both pointing to the same IPs? I am running Windows and have assigned a static IP to one of my computers, which means it doesn't obtain the DNS server addresses from my router automatically. Is there an IP I can enter that will cause it to take the DNS server addresses from the router?

    Read the article

  • MIX 2010 Covert Operations Day 1

    - by GeekAgilistMercenary
    Portland Departure - Farewell Stumptown

    Off I go on a plane from Portland, Oregon to Las Vegas, Nevada for the MIX 2010 Conference. Before I even boarded the plane I met Paul Gomes, a Senior Software Engineer, and Andrew Saylor, the Director of Business Development. Both of these SoftSource employees were en route to MIX themselves. Being stoked to already be bumping into some top-tier people, I bid them adieu and headed for my seat on the plane.

    I boarded, having opted for an upgrade before boarding. I have to advise that if you get a chance on Alaska to upgrade at the last minute, take it. It is usually only about $50 or so, and the additional space makes working on the ole' laptop actually possible (even on my monstrous 17" laptop). So take it from me: click that upgrade button and fork over the $50 for anything over an hour flight; the comfort and ability to work are usually worth it!

    Las Vegas Arrival - Welcome to Sin City

    Got into Las Vegas and swung out of the airport. Then, with my comrade Beth, I spent the next 3 hours attempting to get Internet access. Las Vegas is not the most Internet-friendly town. I will just say it: I am not sure why any Internet-related company (a la Microsoft) would hold a conference here. There are more than a dozen other cities that would be better. But I digress; I did manage to get Internet access after checking into the Circus Circus. Don't ask why I ended up staying here; if you run into me in person, ask then, because there is a whole story to it.

    At this point I started checking out each session further on the MIX10 site. There are a number I deemed necessary to check out. However, you'll have to read my pending entries to see which sessions I jumped into. With this juncture in time reached, I've got a ton of work to wrap up, some code to write, and some sleep to get. Until tomorrow, adieu. For more of my writing, thoughts, and other topics, check out my other blog, where the original entry is posted.

    Read the article

  • How to add a web folder via command line (Windows)

    - by Ryan
    I am trying to add a web folder via the command line in Windows. At first I thought I should use the "net use" command, but when I tried, I kept getting System error 67:

      C:\> net use * http://dev.subdomain.domain.tdl/dav/
      the user name for 'dev.restech.niu.edu': correctusername
      the password for dev.restech.niu.edu:
      System error 67 has occurred.
      The network name cannot be found.

    The URL I used works in a browser. It's Apache DAV using the basic-auth LDAP authentication method. Here's the thing... I CAN create a web folder when I use the "Add a network place" wizard, but when I then run net use, I don't see it listed in the output. What utility do I need to use to mount a web folder from the command line?
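
    Error 67 against an http:// path is often just the WebDAV redirector not running; a sketch of the usual sequence, reusing the poster's URL (the assumption being that the WebClient service is the missing piece):

      :: start the WebDAV redirector, then retry the mapping
      net start webclient
      net use * "http://dev.subdomain.domain.tdl/dav/" /user:correctusername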

    Read the article

  • Remote installing an msi on citrix servers using WMI

    - by capn
    OK, I'm a C# programmer trying to streamline the deployment of a custom Windows Forms app I inherited and built an installer for with WiX (this app will need to be reinstalled regularly as I'm making changes to it). I'm not really used to admin-type things (or VBS, or WMI, or terminal servers, or Citrix; even WiX and MSI are not things I usually deal with), but so far I have put together some VBS and have an end goal in mind. The MSI does work: I've installed it from the mapped O: drive on my dev machine and while RDP'd to a Citrix machine.

    End goal: deploy code written on my dev machine and compiled into an MSI (which I can improve upon within the confines of WiX and whatever the Windows Installer engine allows) to the cluster of Citrix machines my users have access to. What am I missing in my script to get the MSI to execute on the remote machines?

    Layout: Machine A is my dev machine, and has the VBS script and the MSI file (XP SP3). Machines C1 - C6 are the Citrix servers that need the application installed via the MSI (Server 2003 R2 SP2). There is also, optionally, a shared network resource that all the machines can access.

    Script:

      'Set WMI Constants
      Const wbemImpersonationLevelImpersonate = 3
      Const wbemAuthenticationLevelPktPrivacy = 6

      'Set whether this is installing to the debug Citrix Servers
      Const isDebug = true

      'Set MSI location
      'Network location yields error 1619 (This installation package could not be opened.)
      msiLocation = "\\255.255.255.255\odrive\Citrix Deployment\Setup.msi"
      'Directory on machine A yields error 3 (file not found)
      'msiLocation = "C:\Temp\Deploy\Setup.msi"
      'Mapped network drive (on both machines) yields error 3 (file not found)
      'msiLocation = "O:\Citrix Deployment\Setup.msi"

      'Set login information
      strDomain = "MyDomain"
      Wscript.StdOut.Write "user name:"
      strUser = Wscript.StdIn.ReadLine
      Set objPassword = CreateObject("ScriptPW.Password")
      Wscript.StdOut.Write "password:"
      strPassword = objPassword.GetPassword()

      'Names of Citrix Servers
      Dim citrixServerArray
      If isDebug Then
          citrixServerArray = array("C4")
      Else
          'citrixServerArray = array("C1","C2","C3","C5","C6")
      End If

      'Loop through each Citrix Server
      For Each citrixServer in citrixServerArray
          'Login to remote computer
          Set objLocator = CreateObject("WbemScripting.SWbemLocator")
          Set objWMIService = objLocator.ConnectServer(citrixServer, _
              "root\cimv2", _
              strUser, _
              strPassword, _
              "MS_409", _
              "ntlmdomain:" + strDomain)

          'Set remote impersonation level
          objWMIService.Security_.ImpersonationLevel = wbemImpersonationLevelImpersonate
          objWMIService.Security_.AuthenticationLevel = wbemAuthenticationLevelPktPrivacy

          'Reference to a process on the machine
          Dim objProcess : Set objProcess = objWMIService.Get("Win32_Process")

          'Change user to install for terminal services
          errReturn = objProcess.Create _
              ("cmd.exe /c change user /install", Null, Null, intProcessID)
          WScript.Echo errReturn

          'Install MSI here
          'Reference to a product on the machine
          Set objSoftware = objWMIService.Get("Win32_Product")

          'All users set in option parameter; I'm led to believe the third
          'parameter is actually ignored
          'http://www.webmasterkb.com/Uwe/Forum.aspx/vbscript/2433/Installing-programs-with-VbScript
          errReturn = objSoftware.Install(msiLocation, "ALLUSERS=2 REBOOT=ReallySuppress", True)
          Wscript.Echo errReturn

          'Change user back to execute
          errReturn = objProcess.Create _
              ("cmd.exe /c change user /execute", Null, Null, intProcessID)
          WScript.Echo errReturn
      Next

    I also tried using this to install; it doesn't return an error code, but it doesn't install the MSI either, and it makes me wonder if the change user /install command is even really working:

      errReturn = objProcess.Create _
          ("cmd.exe /c msiexec /i ""O:\Citrix Deployment\Setup.msi"" /quiet")
      Wscript.Echo errReturn
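
    One likely culprit, offered as an assumption rather than a confirmed diagnosis: Win32_Product.Install runs in the remote machine's SYSTEM context, which typically cannot reach network paths, hence error 1619 for the UNC location. A sketch of copying the MSI to the target first and installing from a local path (the share and folder names are placeholders):

      ' copy the package onto the Citrix box, then install from local disk
      errReturn = objWMIService.Get("Win32_Process").Create( _
          "cmd.exe /c copy ""\\FILESRV\deploy\Setup.msi"" C:\Temp\Setup.msi", _
          Null, Null, intProcessID)
      ' ...wait for the copy to complete, then:
      errReturn = objSoftware.Install("C:\Temp\Setup.msi", _
          "ALLUSERS=2 REBOOT=ReallySuppress", True)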

    Read the article

  • What are the challenges when my enterprise desires to move the processing component of an application to the cloud?

    - by Berkay
    Assume that I have an enterprise accounting application that consists of a front-end interface, a processing tier, and a back-end database. This is an application that contains private business data, and thus is traditionally run in a secure private network environment within the enterprise. What are the challenges that appear when my enterprise wants to move the processing component of this application to a cloud computing data center in order to achieve greater scalability or to reduce IT costs? Please note: do I have to make significant changes to my own infrastructure to enable external access to formerly private resources? Do I have to modify the application code to handle the new network topology? Thanks; if you give your answers in a simple manner, it's really appreciated.

    Read the article

  • Mac friendly file sharing from VirtualBox

    - by kitsched
    I have set up Ruby on Rails on Ubuntu in a VirtualBox instance on my PC, enabled Samba, and I'm connecting to it over the home network from my Mac. All is fine, except that I have some issues deleting files from inside applications; e.g., in Sublime Text 2, when I right-click a file in the browser and select delete, nothing happens (same in my Git client). To be able to delete files, I have to navigate to the folder in Finder (which leaves those nasty .DS_Store files scattered all around) or issue the delete command from the terminal (inconvenient). If you're asking why I'm using VirtualBox for Rails instead of developing directly on the Mac, it's because of the ease of portability. So my question is: are there any network sharing options I could use to make the Linux instance play nicer with my Mac?
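
    If you stay on Samba, one tweak worth trying is vetoing the Finder metadata files at the share level; a sketch of an smb.conf stanza (the share name and path are assumptions):

      [rails]
         path = /home/dev/rails
         read only = no
         veto files = /._*/.DS_Store/
         delete veto files = yes

    After a `sudo service smbd restart`, Finder can no longer scatter .DS_Store files onto the share, and "delete veto files = yes" lets directories containing vetoed files still be removed.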

    Read the article

  • Joining computers from workgroup to active directory

    - by George R
    I have several computers at my office that I want to put on our domain. These computers are currently used by employees with local computer accounts, and they have information stored under those accounts. When I join their computers to the domain, how can I keep their current computer accounts and add them to Active Directory so they can log in as usual and access network resources? Is this possible, or do I need to start from scratch with their accounts and locally stored files? We are using Windows 2008 R2, and all systems being added have Windows 7 Pro or higher. All I really want to do is add the systems to the domain and have their accounts in Active Directory so they can log in, access files already on their computers, and use network resources. If I can add their computer and then add the same username and password to Active Directory to get this all to work, that would be fine. I am just looking for minimal impact on the user.
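
    One common route, sketched under the assumption that a profile-migration pass is acceptable: capture each local profile with Microsoft's User State Migration Tool before the domain join, then restore it mapped onto the domain account (the store path, XML set, and account names below are placeholders):

      :: before joining the domain: capture the local profile
      scanstate \\server\migstore\%computername% /i:migdocs.xml /i:migapp.xml /o
      :: after the join: restore it onto the matching domain account
      loadstate \\server\migstore\%computername% /i:migdocs.xml /i:migapp.xml /mu:%computername%\jsmith:MYDOMAIN\jsmith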

    Read the article

  • Great Blog Comments

    - by Paul Sorensen
    Just a quick note to let you know that, in the interest of keeping the most useful content available here on the Oracle Certification Blog, we do moderate the comments. We welcome (and encourage) dialog, questions, comments, etc. here on the topics at hand. We'll never 'censor' out a comment just because we don't like it; in fact, this is how we often learn ways in which we can do better. But of course we will filter out the typical list like anyone else: crude/offensive remarks, foul language, references to illegal activity, etc. We will also often redirect any customer-service type inquiries to [email protected], where they can best be handled.

    Also, if you have a question of a general nature, please research it on the Oracle Certification website first. We often won't respond to questions such as "tell me how to get 11g OCP", as we've already made sure that you have that kind of information available. Now if we've inadvertently 'hidden' something on our site (gulp), then fair enough; please let us know that you're having a hard time finding it and we'll be sure to try and "unbury" it ;-)

    Additionally, you may have more of an 'opinion' question, such as "should I do 'x' certification or 'y' certification." For these, we highly recommend checking the Oracle Technology Network (OTN) Certification Forum, where you can engage in peer-to-peer discussions and share techniques, advice, and best practices with others in the field.

    In the meantime, please continue to share your thoughts, ideas, opinions, tech tips, etc.; we look forward to seeing them and passing them along wherever we can!

    QUICK LINKS:
    Oracle Certification Website
    Email - Customer Service
    Oracle Technology Network (OTN) Certification Forum

    Read the article

  • New Enhancements for InnoDB Memcached

    - by Calvin Sun
    In MySQL 5.6, we continued our development on InnoDB Memcached and completed a few widely desired features that make InnoDB Memcached competitive in more scenarios. Notably, they are: 1) support for multiple table mappings; 2) a background thread to auto-commit long-running transactions; and 3) enhancements in binlog performance. Let's go over each of these features one by one, and in the last section we will go over a couple of internally performed performance tests.

    Support multiple table mapping

    In our earlier release, all InnoDB Memcached operations were mapped to a single InnoDB table. In real life, users may want to use the InnoDB Memcached features on different tables. Thus, being able to access different tables at run time, with different mappings for different connections, becomes a very desirable feature, and in this GA release users can do both. We will discuss the key concepts and key steps in using this feature.

    1) "mapping name" in the "get" and "set" commands

    In order to allow InnoDB Memcached to map to a new table, the user (DBA) is still required to "pre-register" the table(s) in the InnoDB Memcached "containers" table (there is a security consideration behind this requirement). If you would like to know more about the "containers" table, please refer to my earlier blogs on blogs.innodb.com. Once registered, InnoDB Memcached will be able to look up such tables when they are referred to. Each registered table has a unique "registration name" (or mapping_name) corresponding to the "name" field in the "containers" table. To access these tables, users include the "registration name" in their get or set commands, in the form of "get @@new_mapping_name.key"; the prefix "@@" is required to signal a mapped-table change. The key and the "mapping name" are separated by a configurable delimiter; by default it is ".". So the syntax is:

      get [@@mapping_name.]key_name
      set [@@mapping_name.]key_name

    or

      get @@mapping_name
      set @@mapping_name

    Here is an example. Let's set up three tables in the "containers" table. The first maps to the InnoDB table "test/demo_test" with mapping name "setup_1":

      INSERT INTO containers VALUES ("setup_1", "test", "demo_test",
          "c1", "c2", "c3", "c4", "c5", "PRIMARY");

    Similarly, we set up table mappings for table "test/new_demo" with name "setup_2" and for table "my_database/my_demo" with name "setup_3":

      INSERT INTO containers VALUES ("setup_2", "test", "new_demo",
          "c1", "c2", "c3", "c4", "c5", "secondary_index_x");
      INSERT INTO containers VALUES ("setup_3", "my_database", "my_demo",
          "c1", "c2", "c3", "c4", "c5", "idx");

    To switch to table "my_database/my_demo" and get the value corresponding to "key_a", the user issues:

      get @@setup_3.key_a

    (this also outputs the value corresponding to key "key_a"), or simply:

      get @@setup_3

    Once this is done, the connection switches to the "my_database/my_demo" table until another table-mapping switch is requested, so it can continue issuing regular commands like:

      get key_b
      set key_c 0 0 7

    These DMLs will all be directed to the "my_database/my_demo" table. This also implies that different connections can have different bindings (to different tables).

    2) Delimiter: For the delimiter "." that separates the "mapping name" and the key value, we also added a configuration option in the "config_options" system table with the name "table_map_delimiter":

      INSERT INTO config_options VALUES("table_map_delimiter", ".");

    So if users want a different delimiter, they can change it in the config_options table.

    3) Default mapping: Once we have multiple table mappings, there should always be a "default" mapping. For this, we decided that if a mapping named "default" exists, it is chosen as the default mapping; otherwise, the first row of the containers table is chosen. Please note that user tables can be repeated in the "containers" table (for example, if a user wants to access different columns of the same table in different settings), as long as they use different mapping/configuration names in the first column, which is enforced by a unique index.

    4) bind command: In addition, we also extended the protocol and added a bind command. Its usage is fairly straightforward; to switch to the "setup_3" mapping above, you simply issue:

      bind setup_3

    This switches the connection's InnoDB table to "my_database/my_demo". In summary, with this feature you can now access different tables from different sessions, and even within a single connection you can query different tables.

    Background thread to auto-commit long running transactions

    This feature is related to the "batch" concept we discussed in earlier blogs. The "batch" feature allows us to batch read and write operations and commit them only after a certain number of calls. The batch size is controlled by the configuration parameters "daemon_memcached_w_batch_size" and "daemon_memcached_r_batch_size". This can significantly boost performance, but it also comes with some disadvantages: for example, you will not be able to view "uncommitted" operations from the SQL end unless you set the transaction isolation level to READ UNCOMMITTED, and certain row locks will be held for extended periods of time, which reduces concurrency.

    To deal with this, we introduced a background thread that auto-commits transactions that have been idle for a certain amount of time (5 seconds by default). The background thread wakes up every second, loops through every "connection" opened by Memcached, and checks for idle transactions. If a transaction has been idle longer than the limit and is not being used, it is committed. The limit is configurable through "innodb_api_bk_commit_interval"; its default value is 5 seconds, the minimum is 1 second, and the maximum is 1073741824 seconds. With the help of this background thread, you no longer need to worry about long-running uncommitted transactions when setting daemon_memcached_w_batch_size and daemon_memcached_r_batch_size to large numbers. This also reduces the number of locks held by long-running transactions, further increasing concurrency.

    Enhancement in binlog performance

    As you may know, binlog operations are not done by the InnoDB storage engine; rather, they are handled in the MySQL layer. In order to support binlog operations through InnoDB Memcached, we have to artificially create some MySQL constructs to access the binlog handler APIs. In the previous lab release, for simplicity, we opened and destroyed these MySQL constructs (such as THD) for each operation. This required us to force the batch size to 1 whenever binlog was on, no matter what "daemon_memcached_w_batch_size" and "daemon_memcached_r_batch_size" were configured to. This put a big restriction on our ability to scale, and the overhead of creating and destroying those constructs also bogged performance down.

    With this release, we made the necessary changes to keep the MySQL constructs for as long as they are valid for a particular connection, so there are no repeated and redundant open and close (table) calls. Now, even with the binlog option enabled (innodb_api_enable_binlog), we can still batch transactions with daemon_memcached_w_batch_size and daemon_memcached_r_batch_size, and thus scale write/read performance. Although there is still overhead that keeps InnoDB Memcached from performing as fast as it does with binlog turned off, it is much better than the previous release, and we are continuing to optimize this area to improve performance as much as possible.

    Performance Study

    Amerandra of our System QA team conducted some performance studies on queries through our InnoDB Memcached connection and through the plain SQL end, and they show some interesting results. The test was conducted on a "Linux 2.6.32-300.7.1.el6uek.x86_64 ix86 (64)" machine with 16 GB memory, two 4-core Intel Xeon 2.0 GHz X86_64 CPUs, and 2 RAID disks (1027 GB, 733.9 GB). The results are described in the following tables.

    Table 1: Performance comparison on Set operations

      Connections   Memcached plugin Set (QPS) with memcached-threads=8***   5.6.7-RC* Set (QPS)**   X faster
      8             30,000                                                   5,600                   5.36
      32            59,000                                                   13,000                  4.54
      128           68,000                                                   8,000                   8.50
      512           63,000                                                   6,800                   9.23

    * mysql-5.6.7-rc-linux2.6-x86_64
    ** The "set" operation, as implemented in InnoDB Memcached, involves a couple of DMLs: it first queries the table to see whether the "key" exists; if it does not, the new key/value pair is inserted, and if it does, the "value" field of the matching row (by key) is updated. So when used in the above query, it is a precompiled stored procedure, and the query just executes that procedure.
    *** added "--daemon_memcached_option=-t8" (the default is 4 threads)

    So with this "set" query, InnoDB Memcached runs 4.5 to 9 times faster than the MySQL server.

    Table 2: Performance comparison on Get operations

      Connections   Memcached plugin Get (QPS) with memcached-threads=8   5.6.7-RC Get (QPS)   X faster
      8             42,000                                                27,000               1.56
      32            101,000                                               55,000               1.83
      128           117,000                                               52,000               2.25
      512           109,000                                               52,000               2.10

    With the "get" query (or the select query), Memcached performs 1.5 to 2 times faster than normal SQL.

    Summary: We added several much-desired features to InnoDB Memcached in this release, allowing users to operate on different tables through the Memcached interface. We now also provide a background commit thread for long-running idle transactions, allowing large batch write/read configurations without worrying about rows being locked for long periods or uncommitted data not being visible. We also greatly enhanced performance when the binlog is enabled. We will continue making efforts in both performance and functionality to make InnoDB Memcached a good demo case for our InnoDB APIs.

    Jimmy Yang, September 29, 2012
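
    To make the table switching concrete, here is a minimal sketch of a raw Memcached session against the plugin's default port, reusing the setup_3 mapping registered above (the host, stored values, and byte counts are assumptions):

      $ telnet 127.0.0.1 11211
      get @@setup_3.key_a
      VALUE @@setup_3.key_a 0 7
      value_a
      END
      set key_c 0 0 7
      value_c
      STORED

    After the first @@setup_3 reference, both commands operate on my_database/my_demo, matching the binding behavior described in the post.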

    Read the article

  • PHPPgAdmin not working in Ubuntu 14.04

    - by Adam
    After a fresh install of Ubuntu 14.04, I've installed postgresql and phppgadmin from the Ubuntu repos. I am using the Apache 2 web server. PHP is working fine on the web server, as is phpMyAdmin, but phpPgAdmin is not working: when I try to access it at localhost/phppgadmin, I get a 404 message. I've tried creating a symlink in /var/www to the phppgadmin content, but that doesn't seem to work. How do I fix this? EDIT: note that I am using a local proxy server (squid) through which I funnel all my online traffic. While this may be part of the problem, I would be surprised if it was, because I am still on the same machine as phppgadmin, and the requests logged in the Apache access log indicate that incoming requests for the page are coming from the local machine (which is allowed in the policies for phppgadmin, if I understand things correctly).
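
    For what it's worth, Ubuntu 14.04 ships Apache 2.4, which no longer reads conf.d/, so the phppgadmin package's Apache snippet may simply never be loaded. A sketch of the usual fix, assuming the package placed its config at /etc/phppgadmin/apache.conf:

      sudo ln -s /etc/phppgadmin/apache.conf /etc/apache2/conf-available/phppgadmin.conf
      sudo a2enconf phppgadmin
      sudo service apache2 reload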

    Read the article

  • Best Practice for Images in Facebook Pages

    - by BFTrick
    Hello, I am looking for best practices when it comes to uploading images for Facebook fan pages. I notice that Dell, US Cellular, and the Gap all load their images from one of their own web servers. Are there any alternatives? I was hoping for something like being able to upload photos and then hide the gallery, but I couldn't find anything like that. The reason I ask is that I have access to someone's Facebook fan page but not to their web server, so it would be extra work to email the photos to someone who does have access and have them upload them to the web server.

    Read the article

  • Copying files within a Workgroup

    - by Andrew La Grange
    I have three boxes operating in a Windows Server workgroup within a closed network (no domain / no AD). There are several derivations of the scenario that I'm about to outline, but I'm sure I will be able to retool the solution as and when I need to. Essentially the boxes are: 2 x Windows Server 2008 R2 x64 Standard and 1 x Windows Server 2000 Standard. I need to be able to schedule the copying and/or moving of files between various directories on each of the boxes. Each box has a different username and password for the administrator. I have PowerShell 2.0 on the two Win2K8 boxes (obviously). Previously I have used mapped network drives and cmd-line batches to copy the files, but I'd much rather use PowerShell if possible (with shares and/or $ notation). However, the Copy-Item cmdlet doesn't seem to be processing the credential correctly. Perhaps some PowerShell gurus out there might be able to help me. Essentially I'd like to schedule a PS script run to push backup files onto my Win2K box (old file server) periodically.
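
    A sketch of one workaround, given that Copy-Item's -Credential parameter is not honored by the FileSystem provider: authenticate the SMB session first with net use, then copy over the UNC path (the host, share, and account names are placeholders):

      # establish credentials for the remote box, copy, then tear down
      net use \\WIN2K-FS\Backups /user:WIN2K-FS\Administrator P@ssw0rd
      Copy-Item -Path 'D:\Data\*.bak' -Destination '\\WIN2K-FS\Backups' -Recurse
      net use \\WIN2K-FS\Backups /delete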

    Read the article

  • How to Share Files/Folders Between Windows XP, Vista, 7 and Fedora Linux

    - by Akshay Deep Lamba
    Getting started: Log on to Windows XP and click Start, then right-click 'My Computer' and select 'Properties'. Select the 'Computer Name' tab and click 'Change'. Enter the computer and workgroup name and click OK; make sure all systems use the same workgroup name. You will have to restart your computer for the change to take effect.

    After restarting, click Start -> Control Panel, then select Security Center -> Windows Firewall. When Windows Firewall opens, select the 'Exceptions' tab and check the box to enable File and Printer Sharing. Close out when done.

    Next, log on to Fedora and go to System -> Administration -> Add/Remove Software. Search for and install system-config-samba, and install all additional packages when prompted. Ensure that the network settings, along with the correct gateway, are configured so that your system can access the Internet.

    After installing, go to System -> Administration -> Samba. Select Preferences -> Server Settings, enter the workgroup name, and click OK. Then select Preferences -> Samba Users, edit or add users to the Samba database, and click OK. To create shares, click File -> Add Share, select the folder you wish to share, and check: Writable, Visible. Then select the 'Access' tab, give users access to the shares, and click OK to save.

    Next, go to System -> Administration -> Firewall. Select 'Samba' under 'Trusted Services' and enable Samba. Then select 'ICMP' and enable 'Echo Reply (pong)' and 'Echo Request (ping)'. Also add the eth0 interface to the trusted interfaces.

    After that, go to Applications -> System Tools -> Terminal and run the command below:

      su -c 'chkconfig smb on'

    Restart your computer, and if everything is set up correctly, you should be able to view shares from either system. At the terminal:

      su
      setenforce 0
      service smb restart
      service nmb restart
      exit

    Enjoy!
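
    A quick way to verify the shares from the Fedora side once smb is running (the host and user names are placeholders):

      # list the shares the XP box is exporting to the workgroup
      smbclient -L //WINXP-PC -U sambauser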

    Read the article

  • Can't send email through Comcast SMTP to my domains

    - by Midnight Oil
    I am a Comcast customer with 3 computers and 3 computer users in the house. There are 2 fully updated Macs and one PC running Windows 7. We use Mail on the Macs and Outlook on Windows 7. All computer accounts are configured to send mail through port 587 of smtp.comcast.net. I also have two personal domains registered with Network Solutions. For the sake of this discussion, call my domains myOwnDomain1.com and myOwnDomain2.com. I have email addresses at both domains, of the form [email protected] and [email protected].

    Until recently, our email worked as expected. However, sometime between September 13, 2012 and September 19, 2012, we lost the ability to send email through Comcast's SMTP server to the email addresses at my personal domains. If we attempt to send email through Comcast's SMTP to those addresses, the email never arrives, and the email clients give no indication of failure. The email just never arrives. The result is the same on all 3 computers and with all accounts on those computers.

    We can successfully send email through Comcast's SMTP from any of our accounts on any of our computers to any email address other than my addresses at my personal domains. Yet I do receive email at those domains when it is not sent through smtp.comcast.net: for example, I can successfully send email from my Gmail and Yahoo accounts to my email addresses at my personal domains. Furthermore, I can successfully send email through smtp.myOwnDomain1.com to [email protected] and through smtp.myOwnDomain2.com to [email protected].

    Comcast says the problem must be at Network Solutions. According to Network Solutions, their logs show they are not blocking reception of the email, and our IP address is not flagged as a spam source. They say the email is simply not arriving. Does anyone have any ideas why we can't send email through Comcast's SMTP server to my domains? As an odd coincidence, we recently noticed a change in Comcast's SMTP service: there is now a 5-minute delay on all outbound mail. Comcast's SMTP server seems to sit on the mail with a 5-minute timer.

    Read the article

  • Windows Azure Recipe: Software as a Service (SaaS)

    - by Clint Edmonson
    The cloud was tailor-built for aspiring companies to create innovative internet-based applications and solutions. Whether you're a garage startup with very little capital or a Fortune 1000 company, the ability to quickly set up, deliver, and iterate on new products is key to capturing market and mind share. And if you can capture that share and go viral, having resiliency and infinite scale at your fingertips is great peace of mind.

    Drivers: cost avoidance, time to market, scalability.

    Solution: Here's a sketch of how a basic Software as a Service solution might be built out.

    Ingredients:
    - Web Role: this hosts the core web application. Each web role will host an instance of the software, and as the user base grows, additional roles can be spun up to meet demand.
    - Access Control: this service is essential to managing user identity. It's backed by a full-blown implementation of Active Directory and allows the definition and management of users, groups, and roles. A pre-built ASP.NET membership provider is included in the training kit to leverage this capability, but it's also flexible enough to be combined with external identity providers including Windows LiveID, Google, Yahoo!, and Facebook. The provider model provides extensibility to hook into other industry-specific identity providers as well.
    - Databases: nearly every modern SaaS application is backed by a relational database for its core operational data. If the solution is sold to organizations, there's a good chance multi-tenancy will be needed. An emerging best practice for SaaS applications is to stand up separate SQL Azure database instances for each tenant's proprietary data to ensure isolation from other tenants.
    - Worker Role: this is the best place to handle autonomous background processing such as data aggregation, billing through external services, and other specialized tasks that can be performed asynchronously. Placing these tasks in a worker role frees the web roles to focus completely on user interaction and data input, and provides finer-grained control over the system's scalability and throughput.
    - Caching (optional): as a web site's traffic grows, caching can be leveraged to keep frequently used read-only, user-specific, and application resource data in a high-speed distributed in-memory cache, for faster response times and ultimately higher scalability without spinning up more web and worker roles. It includes a token-based security model that works alongside the Access Control service.
    - Blobs (optional): depending on the nature of the software, users may be creating or uploading large volumes of heterogeneous data such as documents or rich media. Blob storage provides a scalable, resilient way to store terabytes of user data. The storage facilities can also integrate with the Access Control service to ensure users' data is delivered securely.

    Training & Examples: These links point to online Windows Azure training labs and examples where you can learn more about the individual ingredients described above. (Note: the entire Windows Azure Training Kit can also be downloaded for offline use.)
    - Windows Azure (16 labs): Windows Azure is an internet-scale cloud computing and services platform hosted in Microsoft data centers, which provides an operating system and a set of developer services which can be used individually or together. It gives developers the choice to build web applications; applications running on connected devices, PCs, or servers; or hybrid solutions offering the best of both worlds. New or enhanced applications can be built using existing skills with the Visual Studio development environment and the .NET Framework. With its standards-based and interoperable approach, the services platform supports multiple internet protocols, including HTTP, REST, SOAP, and plain XML.
    - SQL Azure (7 labs): Microsoft SQL Azure delivers on the Microsoft Data Platform vision of extending SQL Server capabilities to the cloud as web-based services, enabling you to store structured, semi-structured, and unstructured data.
    - Windows Azure Services (9 labs): As applications collaborate across organizational boundaries, ensuring secure transactions across disparate security domains is crucial but difficult to implement. Windows Azure Services provides hosted authentication and access control using powerful, secure, standards-based infrastructure.
    - Developing Applications for the Cloud, 2nd Edition (eBook): This book demonstrates how you can create from scratch a multi-tenant Software as a Service (SaaS) application to run in the cloud, using the latest versions of the Windows Azure Platform and tools. The book is intended for any architect, developer, or IT professional who designs, builds, or operates applications and services that run on or interact with the cloud.
    - Fabrikam Shipping (SaaS reference application): This is a full end-to-end sample scenario which demonstrates how to use the Windows Azure platform for exposing an application as a service. We developed this demo just as you would: we had an existing on-premises sample, Fabrikam Shipping, and we wanted to see what it would take to transform it into a full subscription-based solution. The demo you find here is the result of that investigation.

    See my Windows Azure Resource Guide for more guidance on how to get started, including links to web portals, training kits, samples, and blogs related to Windows Azure.
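
    For orientation only, a minimal sketch of how the web/worker split above is declared in a classic service definition file (the role names and sizes are assumptions, not part of the recipe):

      <ServiceDefinition name="SaaSApp"
          xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
        <!-- front-end instances scale out as the user base grows -->
        <WebRole name="WebFrontEnd" vmsize="Small">
          <Endpoints>
            <InputEndpoint name="HttpIn" protocol="http" port="80" />
          </Endpoints>
        </WebRole>
        <!-- asynchronous background processing (billing, aggregation) -->
        <WorkerRole name="BillingWorker" vmsize="Small" />
      </ServiceDefinition>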

    Read the article

  • SQL Server 2008 is not accessible from Windows Server 2008?

    - by Albert Widjaja
    Hi, I have successfully configured Windows Server 2008 Enterprise SP2 with SQL Server 2008 Enterprise SP2, all 64-bit. However, when I try to access this particular SQL Server 2008 DB instance from SQL Server 2008 SSMS on another Windows Server 2008 machine, it fails. What I did was disable the IPv6 IP address using regedit, but the problem hasn't been fixed even after a restart. I have enabled named pipes as well, but still no luck. Any help please? Here's the error message: "A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified) (Microsoft SQL Server, Error: -1)"
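
    Error 26 means the client could not even locate the named instance, which is resolved via the SQL Browser service on UDP 1434. A sketch of the usual checklist on the database server (offered as an assumption about the cause, not a confirmed diagnosis):

      :: make sure the SQL Browser service is running and reachable
      sc config SQLBrowser start= auto
      net start SQLBrowser
      netsh advfirewall firewall add rule name="SQL Browser" dir=in action=allow protocol=UDP localport=1434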

    Read the article

  • OSX 10.9 Time Machine backup to NAS

    - by user214577
    I recently upgraded from 10.6.8 to 10.9. On Snow Leopard I was able to make Time Machine backups over the network to my NAS; I think I had to tweak some settings, but I don't recall what I did. Now that I've upgraded to Mavericks, I cannot back up to my NAS using Time Machine. My question is: what do I have to do to allow Time Machine backups over the network in 10.9? I tried looking for solutions online but did not find anything relating to Mavericks.
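
    The 10.6-era tweak was usually the defaults flag that lets Time Machine list "unsupported" network volumes; reapplying it on 10.9 is a reasonable first try (an assumption that this is the setting originally changed, and the NAS must still expose an AFP share with Time Machine support):

      defaults write com.apple.systempreferences TMShowUnsupportedNetworkVolumes 1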

    Read the article
