Search Results

Search found 4763 results on 191 pages for 'policy administration'.

Page 100/191

  • Application for Google AdSense rejected twice due to "Unacceptable site content"

    - by Bootcamp
    I have a technical blog at techaxe.com where I post technical articles quite frequently. I applied for Google AdSense and got my application rejected twice. The issue reported both times was "Unacceptable site content". I read Google's content policy and found nothing that would indicate that the content I have on my blog is unacceptable to them. Can someone please guide me as to what should be done so that I can get my AdSense application accepted?

    Read the article

  • Remote Desktop Connection Manager

    - by Robert May
    For years, I’ve been using the “Remote Desktops” MMC plugin to manage servers in our infrastructure.  I’ve upgraded to Windows 8 and Remote Desktops is nowhere to be found!  I searched and searched and came across a forum listing saying “Why don’t you just use Remote Desktop Connection Manager?”  I downloaded it and started using it, and it’s WAY better than Remote Desktops!  I’m glad they took it out and I discovered this tool.  I wish I had discovered this two years ago! Technorati Tags: System Administration

    Read the article

  • Simple way to create a SQL Server Job Using T-SQL

    Sometimes we have a T-SQL process that takes a while to run, or that we want to run during idle time on the server. We could create a SQL Agent job manually, but is there any simple way to create a scheduled job?
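
    A hedged sketch of the T-SQL route the article describes: a SQL Agent job can be created by calling the msdb stored procedures sp_add_job, sp_add_jobstep, and sp_add_jobserver. The Java/JDBC harness below just drives those calls programmatically; the Microsoft JDBC driver is assumed on the classpath, and the connection URL, credentials, job name, and command are invented placeholders.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.Statement;

        public class CreateAgentJob {
            public static void main(String[] args) throws Exception {
                // Placeholder URL and credentials; point this at your own server.
                String url = "jdbc:sqlserver://localhost;databaseName=msdb";
                try (Connection con = DriverManager.getConnection(url, "sa", "secret");
                     Statement st = con.createStatement()) {
                    // Create the job shell.
                    st.execute("EXEC msdb.dbo.sp_add_job @job_name = N'NightlyCleanup'");
                    // Add a single T-SQL step (the command is a placeholder).
                    st.execute("EXEC msdb.dbo.sp_add_jobstep @job_name = N'NightlyCleanup', "
                             + "@step_name = N'Run cleanup', @subsystem = N'TSQL', "
                             + "@command = N'EXEC dbo.usp_Cleanup', @database_name = N'MyDb'");
                    // Register the job with the local SQL Agent so it can be scheduled and run.
                    st.execute("EXEC msdb.dbo.sp_add_jobserver @job_name = N'NightlyCleanup'");
                }
            }
        }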

    Read the article

  • How do I stop root from running a program?

    - by joe Lovick
    I would like to prevent my root user from running certain applications that can change the permissions of files, which in turn prevents normal users from running those applications again. For example, if I sudo to root and then run Thunderbird from the command prompt, it changes the permissions of files within my home directory / profile so I can no longer run it as a normal user. What I would like to do is prevent root from running Thunderbird, and hence stop this user error from repeating itself. Any suggestions? To clarify: if I have a lot of administration to do I use "sudo -s", which gives me a root shell; it's just that once a year or so, I shoot myself in the foot.

    Read the article

  • The ETL from Hell - Diagnosing Batch System Performance Issues

    Too often, the batch systems that underlie a lot of database processing just grow without conscious design. When runs start to extend beyond their allotted time, and tuning no longer solves the problem, it is often discovered that batches are run in series, with draconian error handling. It is time to impose some rational design, and Nigel is a seasoned healer of batch processes.

    Read the article

  • MSXML4.0 SP2 is going out-of-support in April 2010

    This is a friendly reminder that MSXML4.0 SP2 is going out-of-support on 4/13/2010. Applications using MSXML4.0 should either be migrated to MSXML6.0 or upgraded to MSXML4.0 SP3. For more information about the Microsoft Support Lifecycle Policy, please visit the Microsoft Lifecycle Website. You may ask why MSXML6.0 is being mentioned here and what relationship exists between the two versions. Please read below for historical context: MSXML4.0 was released to the web in April 2001 to provide...

    Read the article

  • Is ignoring clients' data during an automatic upgrade acceptable?

    - by A competent translator
    I recently faced the deletion of my calendar and notes data inside a third-party application (Horde) on my online space. When I told the web hosting provider, they replied that they were doing an upgrade to system software (!!) and that clients' third-party applications are not within their reach or backup policy. They don't even have a backup of my data. Is this practice acceptable from a web hosting provider, even for individual clients? I know that backing up my data is my responsibility, but I anticipated that backup copies made by the host would be available when needed.

    Read the article

  • Google App Engine: How to be notified when APIs change or become available?

    - by herpylderp
    I am thinking about writing a GAE app but am a little hesitant because the EULA gives Google full rights to change their APIs anytime they want, for any reason. Obviously, they'd be out of business quickly if they just upped and refactored their entire APIs, so I have to imagine they have some kind of notification system, perhaps even an RSS feed, to notify developers well in advance of looming changes, or of new features coming out in future releases. However, for the life of me I can't seem to find any trace of the existence of such a notification system. Perhaps the Google forums are the only place to get such updates? I guess I'm asking any battle-worn GAE veterans for reassurance that there are reliable ways of getting notifications about policy or API changes from Google, such that I could react and make the necessary app changes without breaking production or impacting any clients. Thanks in advance!

    Read the article

  • TFS Backup Plan Wizard Tool

    - by Enrique Lima
    With the release of the September 2010 TFS 2010 Power Tools came an addition to the Team Foundation Server Administration Console: the Team Foundation Backups tree item.  The tool is used to create backup plans, and to work with it you run through a wizard, just as you would when configuring TFS or any of its extensions. The areas covered by the tool include: backup to a network backup path and retention configuration; under Advanced Options, the extensions to be used for the full and transactional backups; and the capability to include external databases, meaning the reporting databases and SharePoint databases, as part of the plan. There are further options, as you can see, including defining a task scheduler account, setting alerts for notifications on execution of the plans, and configuring the schedule for plan execution.  All in all a very good tool and a great way to safeguard the investment you’ve made.

    Read the article

  • Dependency problem when the missing package is already installed

    - by hakermania
    I am trying to install a program but I am getting a dependency error. The error clearly points out:

        Dependency is not satisfiable: libc6-amd64 (>= 2.14)

    I went on to investigate and found out that I actually have version 2.19 installed:

        alex@MaD-pc:~$ apt-cache policy libc6-amd64
        libc6-amd64:i386:
          Installed: 2.19-0ubuntu6
          Candidate: 2.19-0ubuntu6
          Version table:
         *** 2.19-0ubuntu6 0
                500 http://us.archive.ubuntu.com/ubuntu/ trusty/main i386 Packages
                100 /var/lib/dpkg/status

    Why am I getting this error if I already have this package? I should probably also mention that the system is 100% up to date. I ran the updates and upgrades, restarted the system, and then tried to install the package again, with the same error popping up. Edit 1: I am using amd64, but I have installed some 32-bit libraries required by some program installed via Wine, if I recall correctly.

    Read the article

  • Remote Diagnostic Agent (RDA) version 4.30

    - by inowodwo
    Posted by Maurice Bauhahn. Remote Diagnostic Agent (RDA) version 4.30 was released on December 11th. A free download can be accessed via Knowledge Management article 314422.1 and installed in any Enterprise Performance Management 11.1.2.x environment. EPM-specific instructions are available in Knowledge Management article 1304885.1. This RDA version incorporates two new modules (EAS = Essbase Administration Services; HWA = Hyperion Web Analysis) and improvements in modules and profiles relating to twelve other Hyperion applications (EPM, EPMA, ESS, FCM, HFM, HFR, HIR, HPL, HPSV, HSS, PR, and HSV). To follow best practice, run the related RDA profiles [for example: "perl rda.pl -vnSCRPp Hyperion1112_EAS"] and attach the output zip file [by default in \rda\output\] to your service requests. The comprehensive set of details provided in such output files should help technicians avoid delays in handling service requests (by avoiding ping-pong communications resulting from repeated requests for additional values).

    Read the article

  • Redirect from one domain to another

    - by Michal
    I am dealing with the following problem. I am a customer of a domain reseller which has fair prices and fine administration, and I have all my domains registered with it. Recently I've created a new webpage using a free web service (one of those sites where you can create simple webpages from a template in a few clicks). This new web page has a default address of the form "pagename.provider.cz", but I want to use my own domain, "pagename.cz". And that is the problem, because the provider would assign a domain name to my presentation only if I registered the domain with him. That wouldn't be a problem, but he is three times more expensive than my favorite registrar. So I am thinking about registering "pagename.cz" with my favorite registrar and then making a 301 PHP redirect from it to "pagename.provider.cz". Would this affect my domain ranking (negatively)? Are there any catches which I should care about?
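
    For illustration of the redirect itself: a 301 is just a status code plus a Location header, which tells search engines the move is permanent. The asker mentions PHP; a minimal equivalent sketch as a Java servlet (the class name and target URL are assumptions, not the asker's setup) looks like this:

        import java.io.IOException;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        // Permanently redirects every request on pagename.cz to the hosted page.
        public class PermanentRedirectServlet extends HttpServlet {
            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws IOException {
                resp.setStatus(HttpServletResponse.SC_MOVED_PERMANENTLY); // 301
                resp.setHeader("Location", "http://pagename.provider.cz" + req.getRequestURI());
            }
        }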

    Read the article

  • Alternatives to SQL-like databases

    - by user613326
    Well, I was wondering: these days computers usually have 2GB or 4GB of memory. I would like to use a secure client-server model, and an SQL database is the likely candidate. On the other hand, I only have about 8000 records, which will not be frequently read or written; in total they would consume less than 16 megabytes. So it made me wonder: what would be good, secure options in a Windows environment to store the data and work with it in a multi-client, single-server model, without using SQL Server or MySQL? For such a small amount of data, would other ideas be better? I like to keep maintenance as simple as possible (no administrator would need to know SQL maintenance, as they don't know databases in my target environment). Maybe storing in XML files or... something else. I just wonder how others would go about this if ease of administration is the main goal. Oh, and it should be secure too; the client-server data must be a bit secure (maybe NTLM file shares, HTTPS, or... etc.).
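
    As a sketch of the XML-file idea floated above: with roughly 8000 small records, serializing the whole collection to a single XML file is workable without any database, for example with java.beans.XMLEncoder/XMLDecoder from the JDK. The record shape (String arrays) and file path here are assumptions for illustration; securing the file (NTFS ACLs, HTTPS in front of the server) would still be a separate concern.

        import java.beans.XMLDecoder;
        import java.beans.XMLEncoder;
        import java.io.BufferedInputStream;
        import java.io.BufferedOutputStream;
        import java.io.FileInputStream;
        import java.io.FileOutputStream;
        import java.util.ArrayList;

        public class XmlStore {
            // Write the whole record list to one XML file; at ~8000 records this stays small.
            static void save(ArrayList<String[]> records, String path) throws Exception {
                try (XMLEncoder enc = new XMLEncoder(
                        new BufferedOutputStream(new FileOutputStream(path)))) {
                    enc.writeObject(records);
                }
            }

            // Read the list back; the cast mirrors what save() wrote.
            @SuppressWarnings("unchecked")
            static ArrayList<String[]> load(String path) throws Exception {
                try (XMLDecoder dec = new XMLDecoder(
                        new BufferedInputStream(new FileInputStream(path)))) {
                    return (ArrayList<String[]>) dec.readObject();
                }
            }
        }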

    Read the article

  • How to fix "Could not open lock file" because "Permission denied"?

    - by user66498
    Whenever I try to install any software or run the update manager, I get an error stating:

        Package operation failed
        The installation or removal of a software package failed

    When I run apt-get I get this result:

        conan51xd@conan51xd-Lenovo-B470:~$ sudo apt-get -f install
        [sudo] password for conan51xd:
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
        conan51xd@conan51xd-Lenovo-B470:~$ apt-get update
        E: Could not open lock file /var/lib/apt/lists/lock - open (13: Permission denied)
        E: Unable to lock directory /var/lib/apt/lists/
        E: Could not open lock file /var/lib/dpkg/lock - open (13: Permission denied)
        E: Unable to lock the administration directory (/var/lib/dpkg/), are you root?

    Read the article

  • Install problem on UEFI, Windows 8 new desktop

    - by albuquerque mat
    I just bought an HP desktop, a p6-2326s, which has Windows 8 installed. I have tried booting an Ubuntu live disc but the machine won't boot it. When I bring up the UEFI boot menu it offers a selection of UEFI boot sources: Windows Boot Manager, the DVD drive, the IP4 Ethernet controller, or the IP6 Ethernet controller. If I select the DVD drive with the CD in it, I get the message "Secure boot violation. Invalid structure detected. Check secure boot policy in setup." With all other selections it just boots into Windows. So where do I go from here?

    Read the article

  • Hands-on GlassFish FREE Course covering Deployment, Class Loading, Clustering, etc.

    - by arungupta
    René van Wijk, an Oracle ACE Director and a prolific blogger at middlewaremagic.com, has shared the contents of a FREE hands-on course on GlassFish. The course provides an introduction to GlassFish internals, JVM tuning, deployment, class loading, security, resource configuration, and clustering. The self-paced hands-on instructions guide you through the process of installing, configuring, deploying, tuning, and other aspects of application development and deployment on GlassFish. The complete course material is available here. This course can also be taken as a paid instructor-led course, where attendees get their own VM and have plenty of time for Q&A and discussions. Register for the paid course. Oracle Education also offers a similar paid course, Oracle GlassFish Server 3.1: Administration and Deployment.

    Read the article

  • From where can I install my nVidia drivers? [closed]

    - by Arthur Wulf White
    Possible Duplicate: How do I install extra drivers? Additional Drivers tool in Ubuntu 12.10? I have read here and here that I should be able to install drivers, so I'm trying to find the Additional Drivers menu to install the nVidia driver. I started looking for System -> Administration and am not finding it. I have an icon that says System Settings, but it doesn't have any option related to drivers. NOTE: I am using Ubuntu 12.10.

    Read the article

  • Syncing Files between workgroup server and Ubuntu workstation

    - by dotdawtdaught
    Recently I decided that I can't make Windows 8 the primary OS on my laptop, as it is just too cumbersome to deal with. I made the switch to Ubuntu, and so far so good. Using Windows, I have been able to cache folders from my workgroup server using a feature called "Client Side Cache" that allows me to take a copy of my personal files offline while I am in the field; later, when I return, any changes get pushed up to the server and my local cache is refreshed. This feature is completely client driven, although characteristics of it (who and what can be cached, and whether caching is automatic) can be controlled via a policy assigned as part of a directory membership. Can anyone suggest a Linux replacement for this feature? Is there a better way of handling this?

    Read the article

  • Cannot start service SPUserCodeV4 on computer

    - by ybbest
    When you create a sandboxed solution for SharePoint 2010 in Visual Studio 2010 and try to deploy it, you can get the error "Error occurred in deployment step 'Retract Solution': Cannot start service SPUserCodeV4 on computer" (see Picture 1 below). In order to fix this, go to Central Administration -> System Settings -> Manage services on server and start the service "Microsoft SharePoint Foundation User Code Service". If you are developing SharePoint on a DC (domain controller), you need to check the solution from my previous post. Error message. (Picture 1) Locate Microsoft SharePoint Foundation User Code Service. (Picture 2)

    Read the article

  • How Data Transfers differ on Smart Phones: iPhone vs. Android vs. Windows Phone

    - by MCH
    I am interested in how each individual smartphone platform handles data transfers within a third-party app. I want to design apps that allow customers to update, transfer, download, etc. data from their smartphone to their personal computer and vice versa (ranging from plain text, to XML, to a relational database). I only have experience with the iPod Touch, and with one particular app that maintained all the data on an online server, so to update the data on your PC or iPhone you had to go online. Are there other ways to do it, like Bluetooth, wireless LAN, USB, etc.? I believe Apple has certain policies on this in order to control the App Store and individual iPhones. I suppose each company has a particular policy on how an app is allowed to transfer data to another system; does anyone have a good understanding of this? Thank you.

    Read the article

  • Deduping your redundancies

    - by nospam(at)example.com (Joerg Moellenkamp)
    Robin Harris of StorageMojo pointed to an interesting article on ACM Queue about deduplication and its impact on the resiliency of your data against corruption. The problem in short: a considerable number of filesystems store important metadata at multiple locations. For example, the ZFS rootblock is copied to three locations; other filesystems have similar provisions to protect their metadata. However, you can easily prove that the rootblock pointer in the uberblock of ZFS, for example, points to blocks with absolutely equal content in all three locations (with zdb -uu and zdb -r). It has to be that way, because they are protected by the same checksum. A number of devices offer block-level dedup, either as an option or as part of their inner workings. However, when you store three identical blocks on such a device and it does block-level dedup internally, the device may just deduplicate your redundant metadata down to a single block on the non-volatile storage. When that block is corrupted, you essentially have three corrupted copies. Three hit with one bullet. This is indeed an interesting problem: a device doing deduplication doesn't know whether a block is important metadata or just a data block. This is the reason why I like deduplication as it's done in ZFS: it's an integrated part of the filesystem, so important parts don't get deduplicated away. A disk accessed by a block-level interface doesn't know anything about the importance of a block. To its inner mechanism, a metadata block is nothing different from a normal data block, because there is no way to tell that this one is important and that its redundancies aren't allowed to fall prey to some clever deduplication mechanism. Robin talks about this in regard to the Sandforce disk controllers, which use a kind of dedup to reduce some of the nasty effects of writing data to flash, but the problem is much broader: it is relevant whenever you are using a device with block-level deduplication. It's just that for most implementations you have to activate it by command, whereas certain devices do it by default or by design and you don't know about it. I'm not perfectly sure about that, though: given that storage administration and server administration are often different groups with different business objectives, I would ask your storage guys whether they have activated dedup on their boxes without telling anybody else, in order to speak less often with the storage sales rep. The problem is even more interesting with ZFS. You may use ditto blocks to protect important data, storing multiple copies of the data in the pool to increase redundancy, even when your pool consists of just one disk or a striped set of disks. However, when your device does dedup internally, it may remove your redundancy before it hits the non-volatile storage. You've won nothing, except spending your disk quota on the LUNs in the SAN and making your disk admin happy because of the good dedup ratio. However, you can only fall into this specific "deduped ditto block" trap when your pool consists of a single device, because ZFS writes ditto blocks to different disks when there is more than one disk. Yet another reason why you should spend some extra thought when putting your zpool on a single LUN, especially when the LUN is sliced and diced out of a large heap of storage devices by a storage controller.
    However, I have one problem with the article's specific mention of ZFS: you can only be hit by this problem when you are using the deduplicating device for the pool itself, and in the specifically mentioned case of SSDs that isn't the use case. Most deployments of SSDs in conjunction with ZFS are hybrid storage pools, where rotating rust is used as the pool and SSDs serve as L2ARC/sZIL. And there it simply doesn't matter: when you really have to resort to the sZIL (your system went down), it doesn't matter whether one block or several blocks are corrupt; you have to fall back to the last known good transaction group on the device. On the other side, when a block in the L2ARC is corrupt, you simply read it from the pool, which in HSP implementations is the already-mentioned rust. In conjunction with ZFS, this is more interesting when using a storage array that is capable of dedup and where you use LUNs for your pool. However, as mentioned before, on those devices it's a user-made decision to do so, and thus less probable that you are deduplicating your redundancies. Other filesystems lacking a capability similar to hybrid storage pools are more "haunted" by this problem of SSDs using dedup-like mechanisms internally, because those filesystems really store the data on the SSD instead of using it just as an accelerating device. At the end, though, Robin is correct: it's yet another reason why protecting your data by creating redundancies, dispersing it across several disks (by mirror or parity RAIDs), is really important. No dedup mechanism inside a device can dedup away your redundancy when you write it to a totally different and independent device.

    Read the article

  • Data Source Security Part 4

    - by Steve Felts
    So far, I have covered the Client Identity and Oracle Proxy Session features, with WLS or database credentials. This article will cover one more feature, identity-based pooling. Then there is one more topic to cover: how these options play with transactions.

    Identity-based Connection Pooling

    An identity-based pool creates a heterogeneous pool of connections. This allows applications to use a JDBC connection with a specific DBMS credential by pooling physical connections with different DBMS credentials. The DBMS credential is based on either the WebLogic user mapped to a database user or the database user directly, depending on the "use database credentials" setting as described earlier. Using this feature with "use database credentials" enabled seems to be what is proposed in the JDBC standard: basically a heterogeneous pool with users specified by getConnection(user, password). The allocation of connections is more complex if the Enable Identity Based Connection Pooling attribute is enabled on the data source. When an application requests a database connection, the WebLogic Server instance selects an existing physical connection or creates a new physical connection with the requested DBMS identity. Heterogeneous connections are created as follows:

    1. At connection pool initialization, physical JDBC connections based on the configured or default "initial capacity" are created with the configured default DBMS credential of the data source.
    2. An application tries to get a connection from a data source.
    3a. If "use database credentials" is not enabled, the user specified in getConnection is mapped to a DBMS credential, as described earlier. If the credential map doesn't have a matching user, the default DBMS credential is used from the data source descriptor.
    3b. If "use database credentials" is enabled, the user and password specified in getConnection are used directly.
    4. The connection pool is searched for a connection with a matching DBMS credential.
    5. If a match is found, the connection is reserved and returned to the application.
    6. If no match is found, a connection is created or reused based on the maximum capacity of the pool:
    - If the maximum capacity has not been reached, a new connection is created with the DBMS credential, reserved, and returned to the application.
    - If the pool has reached maximum capacity, a physical connection is selected from the pool based on the least recently used (LRU) algorithm and destroyed. A new connection is created with the DBMS credential, reserved, and returned to the application.

    It should be clear that finding a matching connection is more expensive than in a homogeneous pool, and destroying a connection and creating a new one is very expensive. If you can use a normal homogeneous pool or one of the light-weight options (client identity or an Oracle proxy connection), those should be used instead of identity-based pooling. Regardless of how physical connections are created, each physical connection in the pool has its own DBMS credential information maintained by the pool. Once a physical connection is reserved by the pool, it does not change its DBMS credential, even if the current thread changes its WebLogic user credential and continues to use the same connection. To configure this feature, select Enable Identity Based Connection Pooling.
    See "Enable identity-based connection pooling for a JDBC data source" in the Oracle WebLogic Server Administration Console Help: http://docs.oracle.com/cd/E24329_01/apirefs.1211/e24401/taskhelp/jdbc/jdbc_datasources/EnableIdentityBasedConnectionPooling.html

    You must make the following changes to use Logging Last Resource (LLR) transaction optimization with identity-based pooling, to get around the problem that multiple users will be accessing the associated transaction table:
    - Configure a custom schema for LLR using a fully qualified LLR table name. All LLR connections will then use the named schema rather than the default schema when accessing the LLR transaction table.
    - Use database-specific administration tools to grant permission to access the named LLR table to all users that could access this table via a global transaction. By default, the LLR table is created during boot by the user configured for the connection in the data source. In most cases, the database will only allow access to this user and not allow access to mapped users.

    Connections within Transactions

    Now that we have covered the behavior of all of these various options, it's time to discuss the exception to all of the rules: when you get a connection within a transaction, it is associated with the transaction context on a particular WLS instance. When getting a connection from a data source configured with non-XA LLR or 1PC (using the JTS driver) with global transactions, the first connection obtained within the transaction is returned on subsequent connection requests, regardless of the username/password specified and independent of the associated proxy user session, if any. The connection must be shared among all users of the connection when using LLR or 1PC. For XA data sources, the first connection obtained within the global transaction is returned on subsequent connection requests within the application server, regardless of the username/password specified and independent of the associated proxy user session, if any. The connection must be shared among all users of the connection within a global transaction within the application server/JVM.
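
    For illustration, the getConnection(user, password) path described above looks like this from application code; the JNDI name and credentials below are placeholders, not values from the article:

        import java.sql.Connection;
        import javax.naming.InitialContext;
        import javax.sql.DataSource;

        public class IdentityPoolingExample {
            public static void main(String[] args) throws Exception {
                InitialContext ctx = new InitialContext();
                // Placeholder JNDI name for a data source with
                // Enable Identity Based Connection Pooling turned on.
                DataSource ds = (DataSource) ctx.lookup("jdbc/myDS");
                // With "use database credentials" enabled these are DBMS credentials;
                // the pool reserves (or creates) a physical connection carrying
                // exactly this identity, per steps 3b-6 above.
                try (Connection con = ds.getConnection("scott", "tiger")) {
                    System.out.println(con.getMetaData().getUserName());
                }
            }
        }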

    Read the article

  • Ubuntu 12.04 fails to find Intel HD Graphics 3000

    - by user69785
    On my Windows 7 installation, an Intel HD Graphics 3000 card/driver shows up. However, in Ubuntu 12.04, System -> Administration -> Hardware Drivers shows no proprietary drivers available for the system. I have tried running the following:

        sudo apt-get install mesa-utils

    which results in the graphics driver incorrectly identifying itself as Sandy Bridge Mobile. Running the following results in no change:

        sudo add-apt-repository ppa:xorg-edgers/ppa
        sudo apt-get update && sudo apt-get upgrade
        sudo reboot

    Does anybody have any information on this behavior?

    Read the article

  • Do they ask too much for this job?

    - by user58404
    I am looking for a web developer job and this job description caught my eye. I am not sure how much they offer, but I was wondering: does anyone here meet all of their requirements? To me, that's a lot of knowledge.

    - 2 to 4+ years experience building web sites and applications in a professional environment
    - Strong working knowledge of HTML5 and CSS3
    - Strong working knowledge of JavaScript, jQuery, AJAX
    - Working knowledge of Ruby on Rails or similar MVC framework
    - Working knowledge of ExpressionEngine, Wordpress or similar CMS
    - Experience administering a LAMP-based server
    - Experience with cross-platform and cross-browser website testing
    - Comfortable working with version control (preferably Git)
    - Proficient with Adobe Photoshop, Illustrator, and Fireworks
    - Comfortable working on a Mac
    - Self-starter with excellent time-management skills and the ability to meet challenging deadlines
    - Ability to work independently with minimal supervision
    - Desire to work on a small team

    Bonus Skills:
    - Experience deploying to Heroku or similar PaaS provider
    - Experience developing Facebook applications
    - A strong sense of design
    - Cool open source projects (send us your Github account!)
    - Advanced working knowledge of server administration and website deployment
    - Java and/or .NET experience

    Read the article

  • Oracle Endeca Training UK - April 10th, 11th

    - by Grant Schofield
    By popular demand we have decided to hold a second Oracle Endeca Information Discovery training event in the UK, to accommodate those who were unable to get into the first. Training is currently being planned for early May in Switzerland, the end of May in Germany, and mid-June in Istanbul at the annual EMEA BI event. Date: 10th - 11th April. Venue: Oracle Reading, UK. A registration link will be sent out shortly, but to avoid disappointment please bookmark this article and block these two days in your diary. You can also register interest by emailing me directly; I will operate a first come, first served policy - [email protected]

    Read the article
