Search Results

Search found 4139 results on 166 pages for 'creation kit'.


  • How do you set the default user in Linux for file creation?

    - by Not a Name
    I want to create a directory, for example /public/all, such that if you create a file in it, the owner is root, but anyone with access to the /public/all folder can delete/edit/etc. the file, just not change its permissions. (I will use a self-created "setx" application to change the execute bit if needed.) The reason for this: I don't want you to be able to deny other users read/write access to files in /public/all. I heard setuid on directories doesn't work for that.
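
    For reference, a minimal sketch of the kind of setup being described, assuming POSIX ACLs are available (the setfacl tool from the acl package). Note that Linux does ignore the setuid bit on directories, so forcing root ownership of newly created files is not directly achievable this way:

        # Anyone may create and delete files in the directory.
        sudo mkdir -p /public/all
        sudo chown root:root /public/all
        sudo chmod 0777 /public/all
        # Default ACL: files created inside start out readable/writable by
        # everyone; only each file's owner (or root) can change permissions.
        sudo setfacl -d -m u::rw,g::rw,o::rw /public/all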

    Read the article

  • Crashes in Core Data's Inferred Mapping Model Creation (Lightweight Migration). Threading Issue?

    - by enchilada
    I'm getting random crashes when creating an inferred mapping model (with Core Data's lightweight migration) within my application. By the way, I have to do this programmatically while the application is running. This is how I create the model (after I have made proper currentModel and newModel objects, of course): NSMappingModel *mappingModel = [NSMappingModel inferredMappingModelForSourceModel:currentModel destinationModel:newModel error:&error]; The problem is this: the method crashes randomly. When it works, it works just fine without issues. But when it crashes, it crashes my application (instead of returning nil to signify that the method failed, as it should). By randomly, I mean that sometimes it happens and sometimes not; it is unpredictable. Now, here is the deal: I'm running this method on another thread. More precisely, it is located inside a block that is passed via GCD to run on a global queue. I need to do this for my UI to appear crisp to the user, i.e. so that I can display a progress indicator while the work is underway. The strange thing is that if I remove the GCD stuff and just let it run on the main thread, it seems to work fine and never crashes. Thus, could the crashes be caused by running this on a different thread? I somehow find that weird, because I don't believe I'm breaking any Core Data rules regarding multi-threading. In particular, I'm not passing any managed objects around, and whenever I need access to the MOC, I create a new MOC, i.e. I'm not relying on any MOC (or, for that matter, anything) that was created earlier on the main thread. Besides, the little MOC work there is happens after the mapping model creation method, i.e. after the point at which the app crashes, so it can't possibly be a cause of the crashes under consideration here. All I'm doing is taking two MOMs and asking for a mapping model between them. That can't be wrong even under threading, now can it? Any ideas on what could be going on?
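
    For reference, a minimal sketch of the dispatch pattern being described, using the names from the question (the progress-UI handling is illustrative):

        dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
            NSError *error = nil;
            // Potentially long-running: infer the mapping model off the main thread.
            NSMappingModel *mappingModel =
                [NSMappingModel inferredMappingModelForSourceModel:currentModel
                                                  destinationModel:newModel
                                                             error:&error];
            dispatch_async(dispatch_get_main_queue(), ^{
                // Back on the main thread: update the progress UI.
                if (mappingModel == nil) {
                    // The method should return nil on failure; handle 'error' here.
                }
            });
        });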

    Read the article

  • Rkhunter 122 suspect files; do I have a problem?

    - by user276166
    I am new to Ubuntu. I am using Xfce Ubuntu 14.04 LTS. I ran rkhunter a few weeks ago and only got a few warnings. The forum said that they were normal. But this time rkhunter reported 122 warnings. Please advise.

    casey@Shaman:~$ sudo rkhunter -c
    [ Rootkit Hunter version 1.4.0 ]

    Checking system commands...
    Performing 'strings' command checks
    Checking 'strings' command [ OK ]
    Performing 'shared libraries' checks
    Checking for preloading variables [ None found ]
    Checking for preloaded libraries [ None found ]
    Checking LD_LIBRARY_PATH variable [ Not found ]
    Performing file properties checks
    Checking for prerequisites [ Warning ]
    /usr/sbin/adduser [ Warning ] /usr/sbin/chroot [ Warning ] /usr/sbin/cron [ OK ] /usr/sbin/groupadd [ Warning ] /usr/sbin/groupdel [ Warning ] /usr/sbin/groupmod [ Warning ] /usr/sbin/grpck [ Warning ] /usr/sbin/nologin [ Warning ] /usr/sbin/pwck [ Warning ] /usr/sbin/rsyslogd [ Warning ] /usr/sbin/useradd [ Warning ] /usr/sbin/userdel [ Warning ] /usr/sbin/usermod [ Warning ] /usr/sbin/vipw [ Warning ]
    /usr/bin/awk [ Warning ] /usr/bin/basename [ Warning ] /usr/bin/chattr [ Warning ] /usr/bin/cut [ Warning ] /usr/bin/diff [ Warning ] /usr/bin/dirname [ Warning ] /usr/bin/dpkg [ Warning ] /usr/bin/dpkg-query [ Warning ] /usr/bin/du [ Warning ] /usr/bin/env [ Warning ] /usr/bin/file [ Warning ] /usr/bin/find [ Warning ] /usr/bin/GET [ Warning ] /usr/bin/groups [ Warning ] /usr/bin/head [ Warning ] /usr/bin/id [ Warning ] /usr/bin/killall [ OK ] /usr/bin/last [ Warning ] /usr/bin/lastlog [ Warning ] /usr/bin/ldd [ Warning ] /usr/bin/less [ OK ] /usr/bin/locate [ OK ] /usr/bin/logger [ Warning ] /usr/bin/lsattr [ Warning ] /usr/bin/lsof [ OK ] /usr/bin/mail [ OK ] /usr/bin/md5sum [ Warning ] /usr/bin/mlocate [ OK ] /usr/bin/newgrp [ Warning ] /usr/bin/passwd [ Warning ] /usr/bin/perl [ Warning ] /usr/bin/pgrep [ Warning ] /usr/bin/pkill [ Warning ] /usr/bin/pstree [ OK ] /usr/bin/rkhunter [ OK ] /usr/bin/rpm [ Warning ] /usr/bin/runcon [ Warning ] /usr/bin/sha1sum [ Warning ] /usr/bin/sha224sum [ Warning ] /usr/bin/sha256sum [ Warning ] /usr/bin/sha384sum [ Warning ] /usr/bin/sha512sum [ Warning ] /usr/bin/size [ Warning ] /usr/bin/sort [ Warning ] /usr/bin/stat [ Warning ] /usr/bin/strace [ Warning ] /usr/bin/strings [ Warning ] /usr/bin/sudo [ Warning ] /usr/bin/tail [ Warning ] /usr/bin/test [ Warning ] /usr/bin/top [ Warning ] /usr/bin/touch [ Warning ] /usr/bin/tr [ Warning ] /usr/bin/uniq [ Warning ] /usr/bin/users [ Warning ] /usr/bin/vmstat [ Warning ] /usr/bin/w [ Warning ] /usr/bin/watch [ Warning ] /usr/bin/wc [ Warning ] /usr/bin/wget [ Warning ] /usr/bin/whatis [ Warning ] /usr/bin/whereis [ Warning ] /usr/bin/which [ OK ] /usr/bin/who [ Warning ] /usr/bin/whoami [ Warning ] /usr/bin/unhide.rb [ Warning ] /usr/bin/mawk [ Warning ] /usr/bin/lwp-request [ Warning ] /usr/bin/heirloom-mailx [ OK ] /usr/bin/w.procps [ Warning ]
    /sbin/depmod [ Warning ] /sbin/fsck [ Warning ] /sbin/ifconfig [ Warning ] /sbin/ifdown [ Warning ] /sbin/ifup [ Warning ] /sbin/init [ Warning ] /sbin/insmod [ Warning ] /sbin/ip [ Warning ] /sbin/lsmod [ Warning ] /sbin/modinfo [ Warning ] /sbin/modprobe [ Warning ] /sbin/rmmod [ Warning ] /sbin/route [ Warning ] /sbin/runlevel [ Warning ] /sbin/sulogin [ Warning ] /sbin/sysctl [ Warning ]
    /bin/bash [ Warning ] /bin/cat [ Warning ] /bin/chmod [ Warning ] /bin/chown [ Warning ] /bin/cp [ Warning ] /bin/date [ Warning ] /bin/df [ Warning ] /bin/dmesg [ Warning ] /bin/echo [ Warning ] /bin/ed [ OK ] /bin/egrep [ Warning ] /bin/fgrep [ Warning ] /bin/fuser [ OK ] /bin/grep [ Warning ] /bin/ip [ Warning ] /bin/kill [ Warning ] /bin/less [ OK ] /bin/login [ Warning ] /bin/ls [ Warning ] /bin/lsmod [ Warning ] /bin/mktemp [ Warning ] /bin/more [ Warning ] /bin/mount [ Warning ] /bin/mv [ Warning ] /bin/netstat [ Warning ] /bin/ping [ Warning ] /bin/ps [ Warning ] /bin/pwd [ Warning ] /bin/readlink [ Warning ] /bin/sed [ Warning ] /bin/sh [ Warning ] /bin/su [ Warning ] /bin/touch [ Warning ] /bin/uname [ Warning ] /bin/which [ OK ] /bin/kmod [ Warning ] /bin/dash [ Warning ]
    [Press <ENTER> to continue]

    Checking for rootkits...
    Performing check of known rootkit files and directories
    55808 Trojan - Variant A [ Not found ] ADM Worm [ Not found ] AjaKit Rootkit [ Not found ] Adore Rootkit [ Not found ] aPa Kit [ Not found ] Apache Worm [ Not found ] Ambient (ark) Rootkit [ Not found ] Balaur Rootkit [ Not found ] BeastKit Rootkit [ Not found ] beX2 Rootkit [ Not found ] BOBKit Rootkit [ Not found ] cb Rootkit [ Not found ] CiNIK Worm (Slapper.B variant) [ Not found ] Danny-Boy's Abuse Kit [ Not found ] Devil RootKit [ Not found ] Dica-Kit Rootkit [ Not found ] Dreams Rootkit [ Not found ] Duarawkz Rootkit [ Not found ] Enye LKM [ Not found ] Flea Linux Rootkit [ Not found ] Fu Rootkit [ Not found ] Fuck`it Rootkit [ Not found ] GasKit Rootkit [ Not found ] Heroin LKM [ Not found ] HjC Kit [ Not found ] ignoKit Rootkit [ Not found ] IntoXonia-NG Rootkit [ Not found ] Irix Rootkit [ Not found ] Jynx Rootkit [ Not found ] KBeast Rootkit [ Not found ] Kitko Rootkit [ Not found ] Knark Rootkit [ Not found ] ld-linuxv.so Rootkit [ Not found ] Li0n Worm [ Not found ] Lockit / LJK2 Rootkit [ Not found ] Mood-NT Rootkit [ Not found ] MRK Rootkit [ Not found ] Ni0 Rootkit [ Not found ] Ohhara Rootkit [ Not found ] Optic Kit (Tux) Worm [ Not found ] Oz Rootkit [ Not found ] Phalanx Rootkit [ Not found ] Phalanx2 Rootkit [ Not found ] Phalanx2 Rootkit (extended tests) [ Not found ] Portacelo Rootkit [ Not found ] R3dstorm Toolkit [ Not found ] RH-Sharpe's Rootkit [ Not found ] RSHA's Rootkit [ Not found ] Scalper Worm [ Not found ] Sebek LKM [ Not found ] Shutdown Rootkit [ Not found ] SHV4 Rootkit [ Not found ] SHV5 Rootkit [ Not found ] Sin Rootkit [ Not found ] Slapper Worm [ Not found ] Sneakin Rootkit [ Not found ] 'Spanish' Rootkit [ Not found ] Suckit Rootkit [ Not found ] Superkit Rootkit [ Not found ] TBD (Telnet BackDoor) [ Not found ] TeLeKiT Rootkit [ Not found ] T0rn Rootkit [ Not found ] trNkit Rootkit [ Not found ] Trojanit Kit [ Not found ] Tuxtendo Rootkit [ Not found ] URK Rootkit [ Not found ] Vampire Rootkit [ Not found ] VcKit Rootkit [ Not found ] Volc Rootkit [ Not found ] Xzibit Rootkit [ Not found ] zaRwT.KiT Rootkit [ Not found ] ZK Rootkit [ Not found ]
    [Press <ENTER> to continue]

    Performing additional rootkit checks
    Suckit Rookit additional checks [ OK ]
    Checking for possible rootkit files and directories [ None found ]
    Checking for possible rootkit strings [ None found ]
    Performing malware checks
    Checking running processes for suspicious files [ None found ]
    Checking for login backdoors [ None found ]
    Checking for suspicious directories [ None found ]
    Checking for sniffer log files [ None found ]
    Performing Linux specific checks
    Checking loaded kernel modules [ OK ]
    Checking kernel module names [ OK ]
    [Press <ENTER> to continue]

    Checking the network...
    Performing checks on the network ports
    Checking for backdoor ports [ None found ]
    Checking for hidden ports [ Skipped ]
    Performing checks on the network interfaces
    Checking for promiscuous interfaces [ None found ]

    Checking the local host...
    Performing system boot checks
    Checking for local host name [ Found ]
    Checking for system startup files [ Found ]
    Checking system startup files for malware [ None found ]
    Performing group and account checks
    Checking for passwd file [ Found ]
    Checking for root equivalent (UID 0) accounts [ None found ]
    Checking for passwordless accounts [ None found ]
    Checking for passwd file changes [ Warning ]
    Checking for group file changes [ Warning ]
    Checking root account shell history files [ None found ]
    Performing system configuration file checks
    Checking for SSH configuration file [ Not found ]
    Checking for running syslog daemon [ Found ]
    Checking for syslog configuration file [ Found ]
    Checking if syslog remote logging is allowed [ Not allowed ]
    Performing filesystem checks
    Checking /dev for suspicious file types [ Warning ]
    Checking for hidden files and directories [ Warning ]
    [Press <ENTER> to continue]

    System checks summary
    =====================
    File properties checks...
    Required commands check failed
    Files checked: 137
    Suspect files: 122

    Rootkit checks...
    Rootkits checked : 291
    Possible rootkits: 0

    Applications checks...
    All checks skipped

    The system checks took: 5 minutes and 11 seconds

    All results have been written to the log file (/var/log/rkhunter.log)
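
    For what it's worth, these are almost all file-properties warnings, which commonly appear after routine package updates rather than a compromise. A hedged first step, assuming you have checked /var/log/rkhunter.log and your apt history and trust that the flagged binaries came from legitimate updates, is to refresh rkhunter's file-properties baseline:

        sudo rkhunter --propupd   # re-baseline the file properties database
        sudo rkhunter -c          # re-run the check against the new baseline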

    Read the article

  • What is the meaning of this WebKit JavaScript warning? (Thickbox)

    - by janoChen
    WebKit Developer Tools reports the following JavaScript error:

    Unsafe JavaScript attempt to access frame with URL file:///D:/wamp/www/projects/c1-marching-band2/instruments.html from frame with URL file:///D:/wamp/www/projects/c1-marching-band2/iframeModal.html?placeValuesBefore. Domains, protocols and ports must match.

    file:///D:/wamp/www/projects/c1-marching-band2/iframeModal.html?placeValuesBefore:15 Uncaught TypeError: Property 'tb_remove' of object [object DOMWindow] is not a function

    I'm trying to close the Thickbox div (iframe) by pressing a submit button, but I think there's something wrong with 'tb_remove'. (Just in Chrome.) It does work on the official page: http://jquery.com/demo/thickbox/
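
    For reference, tb_remove() is defined by Thickbox in the parent document, so from inside the modal iframe it has to be reached through the parent window. A hedged sketch of a guarded call (which Chrome will still block when both pages are loaded via file://, since it treats them as different origins):

        // Close the Thickbox modal from inside its iframe, if the call is allowed.
        if (window.parent && typeof window.parent.tb_remove === "function") {
            window.parent.tb_remove();
        }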

    Read the article

  • Canvas Animation Kit Experiment... how to clear the canvas?

    - by Ted Wong
    I can make an object use the canvas to draw like this:

    MyObj.myDiv = new Canvas($("effectDiv"), Setting.width, Setting.height);

    Then I use this to draw a rectangle on the canvas:

    var c = new Rectangle(80, 80, { fill: [220, 40, 90] });
    var move = new Timeline;
    move.addKeyframe(0, { x: 0, y: 0 });
    c.addTimeline(move);
    MyObj.myDiv.append(c);

    But after I draw the rectangle, I want to clear the canvas, and I don't know which method to use or how to do it. O... one more thing: here is CAKE's web site: Link
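
    This is not CAKE-specific, but for reference, the underlying 2D canvas API can wipe the drawing surface directly; a minimal sketch, assuming a reference to the actual <canvas> element can be obtained:

        // Erase everything previously drawn on the canvas.
        var canvas = document.getElementById("effectDiv"); // assumes this is the <canvas>
        var ctx = canvas.getContext("2d");
        ctx.clearRect(0, 0, canvas.width, canvas.height);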

    Read the article

  • How difficult is it for an old-school programmer to pick up an FPGA kit and make something useful with it?

    - by JUST MY correct OPINION
    I'm an old, old, old coder. (How old? I've used paper tape in anger.) I've programmed in a lot of languages and under a lot of paradigms (spaghetti, structured, object-oriented, functional, and a smattering of logical). I'm getting bored. FPGAs look interesting to me. I have the crazy notion of using FPGAs to resurrect some of the ancient hardware I worked on back in the day. I know this can be done because I've seen PDP-10 and PDP-11 implementations in FPGAs. I'd like to do the same for a few machines that are perhaps not as popular as those two, however. While I am an old, old coder, what I am not is an electronics or computer systems engineer; I'll be learning from scratch if I go down this path. My question, therefore, is twofold: How difficult will it be for this old dinosaur to pick up and learn FPGAs to the point that interesting (not necessarily practical; more from a hobbyist perspective) projects can be made? And what should I start with, learning-wise, to go down this path? I know where to get FPGA kits, but I haven't found anything like "FPGAs for Complete Dinosaurs" out there yet.

    Read the article

  • Why does the WCF 3.5 REST Starter Kit do this?

    - by Brandon
    I am setting up a REST endpoint that looks like the following:

    [WebInvoke(Method = "POST", UriTemplate = "?format=json", BodyStyle = WebMessageBodyStyle.WrappedRequest, ResponseFormat = WebMessageFormat.Json)]

    and

    [WebInvoke(Method = "DELETE", UriTemplate = "?token={token}&format=json", ResponseFormat = WebMessageFormat.Json)]

    The above throws the following error:

    UriTemplateTable does not support '?format=json' and '?token={token}&format=json' since they are not equivalent, but cannot be disambiguated because they have equivalent paths and the same common literal values for the query string. See the documentation for UriTemplateTable for more detail.

    I am not an expert at WCF, but I would imagine that it should map first by the HTTP method and then by the URI template. It appears to be backwards. If both of my URI templates are ?token={token}&format=json, this works because they are equivalent, and it then appears to look at the HTTP method, where one is POST and the other is DELETE. Is REST supposed to work this way? Why are the URI template tables not sorted first by HTTP method and then by URI template? This can cause some serious frustration when one HTTP method requires a parameter and another does not, or if I want to support optional parameters (e.g. if the 'format' parameter is not passed, default to XML).
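
    For reference, a sketch of the workaround the question itself implies: give both operations an identical template so the UriTemplateTable treats them as equivalent and dispatch falls through to the HTTP method. The contract and method signatures here are illustrative, not from the original code:

        [ServiceContract]
        public interface ITokenService
        {
            [OperationContract]
            [WebInvoke(Method = "POST",
                       UriTemplate = "?token={token}&format=json",
                       BodyStyle = WebMessageBodyStyle.WrappedRequest,
                       ResponseFormat = WebMessageFormat.Json)]
            void Create(string token);   // can simply ignore the token it doesn't need

            [OperationContract]
            [WebInvoke(Method = "DELETE",
                       UriTemplate = "?token={token}&format=json",
                       ResponseFormat = WebMessageFormat.Json)]
            void Delete(string token);
        }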

    Read the article

  • EM12c Release 4: Database as a Service Enhancements

    - by Adeesh Fulay
    Oracle Enterprise Manager 12.1.0.4 (or simply EM12c R4) is the latest update to the product. Like previous releases, it provides tons of enhancements and bug fixes, contributing to improved stability and quality. One of the most exciting areas, and one that has seen tremendous growth in the last few years, is Database as a Service. EM12c R4 provides a significant update to Database as a Service. The key themes are:

    - Comprehensive Database Service Catalog (includes single instance, RAC, and Data Guard)
    - Additional Storage Options for Snap Clone (includes support for the database feature CloneDB)
    - Improved Rapid Start Kits
    - Extensible Metering and Chargeback
    - Miscellaneous Enhancements

    1. Comprehensive Database Service Catalog

    Before we get deep into the implementation of a service catalog, let's first understand what it is and what benefits it provides. Per ITIL, a service catalog is an exhaustive list of IT services that an organization provides or offers to its employees or customers. Service catalogs have been widely popular in the cloud computing space, primarily as the medium for standardized and pre-approved service definitions. There is already some good collateral out there on Oracle database service catalogs. The two whitepapers I recommend reading are:

    - Service Catalogs: Defining Standardized Database Service
    - High Availability Best Practices for Database Consolidation: The Foundation for Database as a Service [Oracle MAA]

    EM12c has shipped with an out-of-the-box service catalog and self service portal since release 1. For customers, it provides the following benefits:

    - Present a collection of standardized database service definitions
    - Define standardized pools of hardware and software for provisioning
    - Role-based access to cater to different classes of users
    - Automated procedures to provision the predefined database definitions
    - Set up chargeback plans based on service tiers and database configuration sizes

    Starting with Release 4, the scope of services offered via the service catalog has been expanded to include databases with varying levels of availability: single instance (SI) or Real Application Clusters (RAC) databases with multiple Data Guard based standby databases. Some salient points of the Data Guard integration:

    - Standby pools can now be defined across different datacenters or within the same datacenter as the primary (this helps in modelling the concept of near and far DR sites)
    - The standby databases can be single instance, RAC, or RAC One Node databases
    - Multiple standby databases can be provisioned, where the maximum limit is determined by the version of the database software
    - The standby databases can be in either mount or read-only mode (the latter requires the Active Data Guard option)
    - All database versions from 10g to 12c are supported (as certified with EM12c)
    - All three protection modes can be used: maximum availability, maximum performance, maximum protection
    - Log apply can be set to sync or async, along with the required apply lag

    The different service levels or service tiers are popularly represented using metals: Platinum, Gold, Silver, Bronze, and so on. The Oracle MAA whitepaper (referenced above) calls out the various service tiers as defined by Oracle's best practices, but customers can choose any logical combination from the table below (all supported in EM 12cR4):

        Primary | Standby [1 or more]
        --------|--------------------
        SI      | -
        SI      | SI
        RAC     | -
        RAC     | SI
        RAC     | RAC
        RON     | -
        RON     | RON

    where RON = RAC One Node (supported via custom post-scripts in the service template). A sample service catalog would look like the image below.
    Here we have defined 4 service levels, deployed across 2 data centers, with 3 standardized sizes. Again, it is important to note that this is just an example to get the creative juices flowing. I imagine each customer would come up with their own catalog based on the application requirements, their RTO/RPO goals, and the product licenses they own. In the screenwatch titled 'Build Service Catalog using EM12c DBaaS', I walk through the complete steps required to set up this sample service catalog in EM12c.

    2. Additional Storage Options for Snap Clone

    In my previous blog posts, I have described the Snap Clone feature in detail. Essentially, it provides a storage-agnostic, self service, rapid, and space-efficient approach to solving your data cloning problems. The net benefit is that you get incredible amounts of storage savings (on average 90%), all while cloning databases in a matter of minutes. Space and time: two things enterprises would love to save on. This feature has been designed with the goal of providing data cloning capabilities while protecting your existing investments in server, storage, and software. With this in mind, we have pursued a dual solution approach of hardware and software. In the hardware approach, we connect directly to your storage appliances and perform all the low-level actions required to rapidly clone your databases. In the software approach, we use an intermediate software layer to talk to any storage vendor or any storage configuration to perform the same low-level actions. Thus we deliver the benefits of database thin cloning without requiring you to drastically change your infrastructure or IT's operating style.

    In Release 4, we expand the scope of options supported by Snap Clone with the addition of database CloneDB. While CloneDB is not a new feature (it was first introduced in the 11.2.0.2 patchset), it has over the years become more stable and mature. CloneDB leverages a combination of the Direct NFS (dNFS) feature of the database, RMAN image copies, sparse files, and copy-on-write technology to create thin clones of databases from existing backups in a matter of minutes. It essentially has all the traits that we want to present to our customers via the Snap Clone feature. For more information on CloneDB, I highly recommend the following sources:

    - Blog by Tim Hall: Direct NFS (DNFS) CloneDB in Oracle Database 11g Release 2
    - Oracle OpenWorld presentation by CERN: Efficient Database Cloning using Direct NFS and CloneDB

    The advantages of the new CloneDB integration with EM12c Snap Clone are:

    - Space and time savings
    - Ease of setup: no additional software is required other than the Oracle database binary
    - Works on all platforms
    - Reduced dependence on storage administrators
    - Cloning process fully orchestrated by EM12c, and delivered to developers/DBAs/QA testers via the self service portal
    - Uses dNFS to deliver better performance, availability, and scalability than kernel NFS
    - Complete lifecycle of the clones managed by EM12c: performance, configuration, etc.

    3. Improved Rapid Start Kits

    DBaaS deployments tend to be complex, and their setup requires a series of steps typically performed by different users in different UIs. The Rapid Start Kit provides a single-command solution to set up Database as a Service (DBaaS) and Pluggable Database as a Service (PDBaaS).
    One command creates all the cloud artifacts: roles, administrators, credentials, database profiles, PaaS Infrastructure Zone, database pools, and service templates. Once the Rapid Start Kit has been successfully executed, requests can be made to provision databases and PDBs from the self service portal. The Rapid Start Kit can create complex topologies involving multiple zones, pools, and service templates. It also supports standby databases and the use of RMAN image backups.

    The Rapid Start Kit is in reality a simple emcli script that takes a set of XML files as input and executes the complete automation in a matter of seconds. On a full-rack Exadata, it took only 40 seconds to set up PDBaaS end to end. The kit works both for Oracle's engineered systems (Exadata, SuperCluster, etc.) and on commodity hardware. One can draw a parallel to the Exadata One Command script, which likewise takes a set of inputs from the administrators and then runs a simple script that configures everything from the network to provisioning the DB software.

    Steps to use the kit:

    - The kit can be found under the SSA plug-in directory on the OMS: EM_BASE/oracle/MW/plugins/oracle.sysman.ssa.oms.plugin_12.1.0.8.0/dbaas/setup
    - It can be run from this default location or from any server which has the emcli client installed
    - For most scenarios, you would use the script dbaas/setup/database_cloud_setup.py
    - For Exadata, special integration is provided to reduce the number of inputs even further. The script to use for this scenario is dbaas/setup/exadata_cloud_setup.py

    The database_cloud_setup.py script takes two inputs:

    - Cloud boundary XML: this file defines the cloud topology in terms of the zones and pools, along with host names, Oracle home locations, or container database names that would be used as infrastructure for provisioning database services. This file is optional in the case of Exadata, as the boundary is already well known via the Exadata system target available in EM.
    - Input XML: this file captures inputs for users, roles, profiles, service templates, etc. Essentially, all inputs required to define the DB services and other settings of the self service portal.

    Once all the XML files have been prepared, invoke the script as follows for PDBaaS:

        emcli @database_cloud_setup.py -pdbaas -cloud_boundary=/tmp/my_boundary.xml -cloud_input=/tmp/pdb_inputs.xml

    The script will prompt for passwords a few times for key users like sysman, cloud admin, SSA admin, etc. Once complete, you can simply log into EM as the self service user and request databases from the portal. More information is available in the Rapid Start Kit chapter of the Cloud Administration Guide.

    4. Extensible Metering and Chargeback

    Last but not least, Metering and Chargeback in Release 4 has been made extensible in all possible regards. The new extensibility features allow customers, partners, system integrators, etc. to:

    - Extend chargeback to any target type managed in EM
    - Promote any metric in EM as a chargeback entity
    - Extend the list of charge items via metric or configuration extensions
    - Model abstract entities like number of backup requests, job executions, support requests, etc.

    A slew of emcli verbs have also been added that allow administrators to create, edit, delete, and import/export charge plans, and to assign cost centers, all via the command line. More information is available in the Chargeback API chapter of the Cloud Administration Guide.

    5. Miscellaneous Enhancements

    There are other miscellaneous, yet important, enhancements that are worth a mention.
    These have mostly been asked for by customers like you. They are:

    - Custom naming of DB services: self service users can provide custom names for the DB SID, DB service, schemas, and tablespaces; every custom name is validated for uniqueness in EM
    - 'Create like' for service templates: creating variants of a service template is now only a click away. This is vital when you publish service templates to represent different database sizes or service levels.
    - Profile viewer: view the details of a profile (datafiles, control files, snapshot IDs, export/import files, etc.) prior to its selection in the service template
    - Cleanup automation for failed and successful requests: a single emcli command cleans up all remnant artifacts of a failed request; cleanup can be performed per request or for the entire pool, and as an extension you can also delete successful requests
    - Improved delete user workflow: allows administrators to reassign cloud resources to another user or delete all of them
    - Support for multiple tablespaces for Schema as a Service: in addition to multiple schemas, users can also specify multiple tablespaces per request

    I hope this was a good introduction to the new Database as a Service enhancements in EM12c R4. I encourage you to explore many of these new and existing features and give us feedback. Good luck!

    References:
    - Cloud Management page on OTN
    - Cloud Administration Guide [Documentation]

    -- Adeesh Fulay (@adeeshf)

    Read the article

  • What is the best book for the preparation of MCPD Exam 70-564 (Designing and Developing ASP.NET 3.5 Applications)?

    - by Steve Johnson
    Hi all, I have seen a couple of questions like this one and scanned through the answers, but somehow the replies were not satisfactory or practical. So I wondered whether people who have gone through it could suggest a better approach for preparing for this exam.

    Goal: my goal is NOT merely to pass the exam; I intend to actually master the skill. I have been doing ASP.NET web development for approximately 1.5 years, and I want to study something that really improves design and development skills in web development in general, and in ASP.NET specifically, which I can put to use and build upon. Please suggest a book that teaches professional ASP.NET design and development skills and approaches to quality development, working through practice design scenarios and their solutions and through case studies that involve design problems and their implemented solutions.

    Edit: I have found the Microsoft training kits fairly interesting and helpful, as they tend to increase knowledge, and I have put a lot of things to use after getting a good explanation from them. However, as far as the Microsoft Training Kit for 70-564 is concerned, there are not a lot of good reviews about it. From what I have read and searched on the net, the reviews on Amazon and various forums (Stack Exchange, Experts Exchange) lean toward the conclusion that the Microsoft Training Kit for Exam 70-564 is not good, at least not compared to other kits from Microsoft, such as the training kit for Exam 70-562. So I am looking for a proper book containing practical, real-world scenarios and case studies, from which I can not only learn but also master the skills, before wasting money on the Microsoft Training Kit for Exam 70-564. Waiting for experts to provide suitable advice.

    Read the article

  • IE9

    - by Kit Ong
    Yep, Internet Explorer 9 is in the works even though IE8 is still relatively new. IE8 totally failed the infamous Acid3 test; things have improved even with the early preview version of IE9. Here's a link to test drive Internet Explorer 9: http://ie.microsoft.com/testdrive/

    Read the article

  • Resolution stuck after playing OpenGL game

    - by kit.yang
    I used to start the game Frozen Throne (using Wine) with the "-opengl" option. When I entered the game, the resolution changed, and it was restored after exiting the game. But this time a problem happened: the resolution can't be restored, although I have restarted my computer several times. Both the Ubuntu panel and the login window are affected. nvidia-settings also detects the resolution as "1024 x 768", but the tool seems useless here. Screenshot: NVIDIA X Server Settings.

    The result of xrandr:

    Screen 0: minimum 320 x 240, current 1024 x 768, maximum 1024 x 768
    default connected 1024x768+0+0 0mm x 0mm
       1024x768       50.0*
       800x600        51.0     52.0     53.0
       680x384        54.0     55.0
       640x480        56.0
       576x432        57.0
       512x384        58.0
       400x300        59.0     60.0     61.0
       320x240        62.0

    The contents of /etc/X11/xorg.conf:

    # nvidia-settings: X configuration file generated by nvidia-settings
    # nvidia-settings: version 1.0 (buildd@yellow) Fri Apr 9 11:51:21 UTC 2010

    Section "ServerLayout"
        Identifier     "Layout0"
        Screen      0  "Screen0" 0 0
        InputDevice    "Keyboard0" "CoreKeyboard"
        InputDevice    "Mouse0" "CorePointer"
        Option         "Xinerama" "0"
    EndSection

    Section "Files"
    EndSection

    Section "InputDevice"
        # generated from default
        Identifier     "Mouse0"
        Driver         "mouse"
        Option         "Protocol" "auto"
        Option         "Device" "/dev/psaux"
        Option         "Emulate3Buttons" "no"
        Option         "ZAxisMapping" "4 5"
    EndSection

    Section "InputDevice"
        # generated from default
        Identifier     "Keyboard0"
        Driver         "kbd"
    EndSection

    Section "Monitor"
        # HorizSync source: builtin, VertRefresh source: builtin
        Identifier     "Monitor0"
        VendorName     "Unknown"
        ModelName      "CRT-0"
        HorizSync       28.0 - 55.0
        VertRefresh     43.0 - 72.0
        Option         "DPMS"
    EndSection

    Section "Device"
        Identifier     "Device0"
        Driver         "nvidia"
        VendorName     "NVIDIA Corporation"
        BoardName      "Entry Graphics"
    EndSection

    Section "Screen"
        # Removed Option "metamodes" "1024x768 +0+0"
        Identifier     "Screen0"
        Device         "Device0"
        Monitor        "Monitor0"
        DefaultDepth    24
        Option         "TwinView" "0"
        Option         "TwinViewXineramaInfoOrder" "CRT-0"
        Option         "metamodes" "1024x768_60 +0+0"
        SubSection     "Display"
            Depth       24
        EndSubSection
    EndSection

    Read the article

  • What does a well formed XML or Schema look like for InfoPath form creation?

    - by Keith Sirmons
    Howdy, are there any resources out there that define what a well-formed XML document or schema should look like for InfoPath? When designing a new form, there is an option to base the new form on an existing XML document or XML Schema as the data source. I am looking for any guidelines or rules that will help me make sure the structure of the XML file I use to create the form works as well as it can. I am creating the XML structure for another project, but we want to make sure the XML that is created is InfoPath-friendly for possible future applications. Thank you, Keith
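
    For reference, a minimal sketch of the kind of flat, single-rooted schema usually easiest to map to form fields; the element names and namespace are illustrative, not an InfoPath requirement from the source:

        <?xml version="1.0" encoding="UTF-8"?>
        <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
                   targetNamespace="http://example.com/myForm"
                   xmlns="http://example.com/myForm"
                   elementFormDefault="qualified">
          <!-- One named root element with a simple sequence of typed fields. -->
          <xs:element name="myForm">
            <xs:complexType>
              <xs:sequence>
                <xs:element name="title" type="xs:string"/>
                <xs:element name="submittedOn" type="xs:date" minOccurs="0"/>
              </xs:sequence>
            </xs:complexType>
          </xs:element>
        </xs:schema>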

    Read the article

  • How to properly name a record creation (insertion) datetime field?

    - by alpav
    If I create a table with a datetime field defaulting to getdate() that is intended to keep the date and time of record insertion, which name is better for that field? I like to use Created, and I've seen people use DateCreated or CreateDate. Other possible candidates I can think of are: CreatedDate, CreateTime, TimeCreated, CreateDateTime, DateTimeCreated, RecordCreated, Inserted, InsertedDate, ... From my point of view, anything with Date inside the name looks bad because it can be confused with the date part in case I have two fields, CreateDate and CreateTime; so I wonder if there are any specific recommendations or standards in that area based on real reasons, not just style, mood, or consistency. Of course, if there are 100 existing tables and this is table 101, then I would use the same naming convention as those 100 tables for the sake of consistency, but this question is about the first table in the first database on the first server in the first application.
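
    For reference, a minimal T-SQL sketch of the column under discussion, using the bare "Created" naming the author leans toward; the table and other columns are illustrative:

        CREATE TABLE Orders (
            OrderId int IDENTITY(1,1) PRIMARY KEY,
            Created datetime NOT NULL DEFAULT GETDATE()  -- set once, at insertion time
        );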

    Read the article

  • The annoying configuration of java-6-openjdk

    - by kit.yang
    I want to change the Java environment to java-6-openjdk.

    /etc/environment:

    PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games"
    JAVA_HOME=/usr/lib/jvm/java-6-openjdk/
    CLASSPATH=.:$CLASSPATH:$JAVA_HOME/lib

    java -version:

    java version "1.6.0_20"
    OpenJDK Runtime Environment (IcedTea6 1.9.5) (6b20-1.9.5-0ubuntu1~10.04.1)
    OpenJDK 64-Bit Server VM (build 19.0-b09, mixed mode)

    javac -version:

    javac 1.6.0_20

    But in the shell, echo $JAVA_HOME prints /usr/lib/jvm/java-6-sun-1.6.0.22, while $CLASSPATH is /usr/lib/jvm/java-6-sun-1.6.0.22/lib. How do I find the other files in which $JAVA_HOME and $CLASSPATH are set to the java-6-sun-1.6.0.22 location?
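
    For reference, a hedged way to hunt down the stale assignments: grep the usual shell startup and system-wide files for the variables, then check which JVM the Ubuntu alternatives system points at (the paths listed are the standard ones, not confirmed from the question):

        grep -rn "JAVA_HOME\|CLASSPATH" ~/.bashrc ~/.profile ~/.bash_profile \
            /etc/profile /etc/profile.d/ /etc/bash.bashrc /etc/environment 2>/dev/null
        sudo update-alternatives --config java   # switch the system-wide java binary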

    Read the article

  • Creating Java classes dynamically and making them accessible to different JVMs across the network, i.e. serial

    - by inj.rav
    Hi. I have a requirement to create Java classes dynamically and make them accessible to different JVMs across the network. I tried to use reflection and the Javassist tool, but nothing worked. Let me explain the scenario: we are using the Coherence distributed cache, which has the power to do aggregation/filtering in parallel across the cluster. For example, suppose a dynamic class has an amount variable and getAmount/setAmount methods; if we execute Coherence queries, they will be processed in parallel across the cluster. I tried to create classes at run time using Javassist and reflection. I am able to access such a class from a single JVM, but when I try to access the same class from another JVM (through the Coherence cluster), I get a class-not-found exception, as the remote JVM has no idea of this class. I can overcome this by creating the same class dynamically on the remote JVM as well and accessing the methods, but Coherence's built-in methods/functions are still not able to find the class. Could someone help me on this matter?
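
    For reference, a minimal Javassist sketch of generating such a class and writing its bytecode to disk, so the same .class file can be placed on every node's classpath; the package and output path are illustrative, and this does not by itself solve cluster-wide distribution:

        import javassist.*;

        public class DynamicClassDemo {
            public static void main(String[] args) throws Exception {
                ClassPool pool = ClassPool.getDefault();
                // Define a class with an amount field plus getter/setter.
                CtClass cc = pool.makeClass("com.example.DynamicEntry");
                cc.addField(CtField.make("private double amount;", cc));
                cc.addMethod(CtNewMethod.make(
                    "public double getAmount() { return amount; }", cc));
                cc.addMethod(CtNewMethod.make(
                    "public void setAmount(double a) { amount = a; }", cc));
                // Emits build/classes/com/example/DynamicEntry.class for deployment.
                cc.writeFile("build/classes");
            }
        }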

    Read the article

  • Using Windows Previous Versions to access ZFS Snapshots (July 14, 2009)

    - by user12612012
    The Previous Versions tab on the Windows desktop provides a straightforward, intuitive way for users to view or recover files from ZFS snapshots. ZFS snapshots are read-only, point-in-time instances of a ZFS dataset, based on the same copy-on-write transactional model used throughout ZFS. ZFS snapshots can be used to recover deleted files or previous versions of files, and they are space efficient because unchanged data is shared between the file system and its snapshots. Snapshots are available locally via the .zfs/snapshot directory and remotely via Previous Versions on the Windows desktop.

    Shadow Copies for Shared Folders was introduced with Windows Server 2003 but subsequently renamed to Previous Versions with the release of Windows Vista and Windows Server 2008. Windows shadow copies, or snapshots, are based on the Volume Snapshot Service (VSS) and, as the [Shared Folders part of the] name implies, are accessible to clients via SMB shares, which is good news when using the Solaris CIFS Service. And the nice thing is that no additional configuration is required; it "just works".

    On Windows clients, snapshots are accessible via the Previous Versions tab in Windows Explorer using the Shadow Copy client, which is available by default on Windows XP SP2 and later. For Windows 2000 and pre-SP2 Windows XP, the client software is available for download from Microsoft: Shadow Copies for Shared Folders Client.

    Assuming that we already have a shared ZFS dataset, we can create ZFS snapshots and view them from a Windows client:

        zfs snapshot tank/home/administrator@snap101
        zfs snapshot tank/home/administrator@snap102

    To view the snapshots on Windows, map the dataset on the client, then right-click on a folder or file and select Previous Versions. Note that Windows will only display previous versions of objects that differ from the originals, so you may have to modify files after creating a snapshot in order to see previous versions of those files.

    The screenshot above shows various snapshots in the Previous Versions window, created at different times. On the left panel, the .zfs folder is visible, illustrating that this is a ZFS share. The .zfs setting can be toggled as desired; it makes no difference when using previous versions. To make the .zfs folder visible:

        zfs set snapdir=visible tank/home/administrator

    To hide the .zfs folder:

        zfs set snapdir=hidden tank/home/administrator

    The following screenshot shows the Previous Versions panel when a file has been selected. In this case the user is prompted to view, copy, or restore the file from one of the available snapshots.

    As can be seen from the screenshots above, the Previous Versions window doesn't display snapshot names: snapshots are listed by snapshot creation time, sorted from most recent to oldest. There's nothing we can do about this; it's the way that the interface works. Perhaps one point of note, to avoid confusion, is that the ZFS snapshot creation time is not the same as the root directory creation timestamp. In ZFS, all object attributes in the original dataset are preserved when a snapshot is taken, including the creation time of the root directory. Thus the root directory creation timestamp is the time that the directory was created in the original dataset.

    # ls -d% all /home/administrator
             timestamp: atime         Mar 19 15:40:23 2009
             timestamp: ctime         Mar 19 15:40:58 2009
             timestamp: mtime         Mar 19 15:40:58 2009
             timestamp: crtime        Mar 19 15:18:34 2009

    # ls -d% all /home/administrator/.zfs/snapshot/snap101
             timestamp: atime         Mar 19 15:40:23 2009
             timestamp: ctime         Mar 19 15:40:58 2009
             timestamp: mtime         Mar 19 15:40:58 2009
             timestamp: crtime        Mar 19 15:18:34 2009

    The snapshot creation time can be obtained using the zfs command as shown below.

    # zfs get all tank/home/administrator@snap101
    NAME                             PROPERTY  VALUE
    tank/home/administrator@snap101  type      snapshot
    tank/home/administrator@snap101  creation  Mon Mar 23 18:21 2009

    In this example, the dataset was created on March 19th and the snapshot was created on March 23rd.

    In conclusion, Shadow Copies for Shared Folders provides a straightforward way for users to view or recover files from ZFS snapshots. The Windows desktop provides an easy to use, intuitive GUI, and no configuration is required to use or access previous versions of files or folders.

    REFERENCES FOR MORE INFORMATION
    - ZFS
    - ZFS Learning Center
    - Introduction to Shadow Copies of Shared Folders
    - Shadow Copies for Shared Folders Client

    Read the article

  • iPad2 - Yet Another Fundamental Defect in an Apple product

    - by Kit Ong
    First it was the antenna defect in the iPhone 4; now it has been reported that some iPad 2 units have display issues. Apple really needs to look at its manufacturing process. It doesn't help that workers are working like robots in the factory of their main supplier, Foxconn.

    More info on the reported display light bleeding:
    http://www.cultofmac.com/if-your-ipad-2-has-display-problems-do-not-return-it-heres-why/87197

    How to check your iPad for dead pixels / light leak / bleed:
    http://www.theipadguide.com/content/ipad-dead-pixel-test-how/7171269

    Read the article

  • Good piece of software that can manage the creation of complex web forms, including reporting, etc?

    - by Callum
    I have some clients who are asking for some of their reasonably complex paper-based forms to be converted into web forms. There's straight Q&A text input stuff; there are questions based around checkboxes, radio buttons, and select boxes, maybe with the occasional attached image; there's data that has to be entered in a tabular fashion; etc. I am deciding whether I should build a "platform" with properly normalised tables to store all types of form data. But before that, I thought I had better check and see if there is anything like that already on the market. I am looking for a product that can:

    - Easily create web forms of all types
    - Store all data in a database
    - Provide extensive reporting capability

    I have had a bit of a look around, but there's not a whole lot I can see. Does anyone have any suggestions? Thanks.

    Read the article

  • Is there a name for a language feature that allows assignment/creation?

    - by Alex Mcp
    This is a bit hard for me to articulate, but in PHP you can say something like: $myArray['someindex'] = "my string"; and if there is no index with that name, it will create/assign the value, while if there IS such an index, it will overwrite the existing value. Compare this to JavaScript, where today I had to do checks like so: if (!myObject[key]) myObject[key] = "value"; I know this may be a bit of a picky point, but is there a name for the ability of PHP (and many other languages) to do these checks on their own, as opposed to the more verbose (read: PITA) method of JavaScript?
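
    For reference, the same "assign only if absent" intent can be written more tersely in JavaScript; two hedged variants with slightly different semantics:

        myObject[key] = myObject[key] || "value";          // also overwrites falsy values (0, "", false)
        if (!(key in myObject)) myObject[key] = "value";   // assigns only when the key is truly missing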

    Read the article

  • A polygon creation program; adjacent-face ignoring is not working right. Any solutions?

    - by user292767
    I'm working on a simple program that converts a 3D array into a polygon structure, similar to voxels. It reads the array and creates cubes for positions with a value, checking the adjacent directions (north, south, east, west, up, down) for a null value before setting up a cube's face. A link that displays the full code, written in GLBasic, is below. Some snapshots to show you what's up. link text
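
    For reference (and not the poster's GLBasic, which isn't shown here), a minimal sketch of the adjacency test being described, emitting a face only when the neighbouring cell is empty:

        # grid is a dict keyed by (x, y, z); present keys are solid cells.
        DIRECTIONS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

        def visible_faces(grid, x, y, z):
            """Return the direction vectors of the faces worth generating."""
            faces = []
            for dx, dy, dz in DIRECTIONS:
                if (x + dx, y + dy, z + dz) not in grid:  # empty neighbour: face is visible
                    faces.append((dx, dy, dz))
            return faces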

    Read the article
