Search Results

Search found 8414 results on 337 pages for 'critical section'.


  • How To: Automatically Remove www from a Domain in IIS7

I recently moved the DevMavens.com site from one server to another and needed to ensure that the www.devmavens.com domain correctly redirected to simply devmavens.com. This is important for SEO reasons (you don't want multiple domains to refer to the same content) and it's generally better to use the shorter URL (www is so 20th century) rather than wasting 4 characters for zero gain.

My friend and IIS guru Scott Forsyth pointed me to his blog post on how to set up IIS URL Rewriting. To get started, you simply install IIS Rewrite from this link using the super awesome Web Platform Installer. You should get something like this when you're done with the install. If you already have IIS Manager open, you may need to close it and re-open it before you see the URL Rewrite module. Once you do, you should see it listed for any given Site under the IIS section.

Double click on the URL Rewrite icon, and then choose the Add Rule(s) action. You can simply create a blank rule, and name it "Redirect from www to domain.com". Essentially we're following the instructions from Scott Forsyth's post, but in reverse, since he's showing how to add 4 useless characters to the URL and I'm interested in removing them.

After adding the name, we'll set the Match URL section's Using dropdown to Wildcards and specify a pattern of simply * to match anything. In the Conditions section we need to add a new condition with an Input of {HTTP_HOST} such that it should match the pattern www.devmavens.com (replace this with your domain). Ignore the Server Variables section. Set the action to Redirect and the Redirect URL to http://devmavens.com/{R:0} (replace with your domain). The {R:0} will be replaced with whatever the user had entered, so if they were going to http://www.devmavens.com/default.aspx they'll now be going to http://devmavens.com/default.aspx.

That's it! Test it out and make sure you haven't accidentally used my exact URLs and started sending all of your users to devmavens.com! :) Be sure to read Scott's post for more information on how to use regular expressions for your rules, and how to set them up via web.config rather than IIS Manager.
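For reference, here is a sketch of what the finished rule might look like in web.config (this assumes the standard IIS URL Rewrite configuration schema; the domain names are the placeholders from this post and should be swapped for your own):

    <system.webServer>
      <rewrite>
        <rules>
          <rule name="Redirect from www to domain.com" patternSyntax="Wildcard" stopProcessing="true">
            <!-- Match any incoming path -->
            <match url="*" />
            <conditions>
              <!-- Only fire when the host is the www variant -->
              <add input="{HTTP_HOST}" pattern="www.devmavens.com" />
            </conditions>
            <!-- Redirect to the bare domain, preserving the original path -->
            <action type="Redirect" url="http://devmavens.com/{R:0}" />
          </rule>
        </rules>
      </rewrite>
    </system.webServer>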


  • Enabling "AllowDualLinkModes" in xorg.conf

    - by Gausie
I'm using a GeForce GT 620 (which is dual-link compatible) and a DVI-I dual-link splitter to try to have a desktop extended onto two monitors. At the moment when I plug in the monitors, only the monitor on the female VGA marked "1" gets a signal, the other screen gets nothing, and only one screen is detected by nvidia-settings. After reading around, I have realised that my graphics card probably comes with dual-link disabled by default, and I can enable it by adding an "AllowDualLinkModes" option to my xorg.conf. This is the current state of my xorg.conf:

    Section "Device"
        Identifier "Default Device"
        Option "NoLogo" "True"
    EndSection

Where do I put the line about AllowDualLinkModes? Do I create a new Section? Have I misunderstood? Cheers Gausie
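Options for the NVIDIA proprietary driver go inside the existing Device section rather than a new one. A sketch follows, with the caveat that the exact token ("AllowDualLinkModes" is the name given in the question) and the option it belongs under should be verified against the driver's README; on NVIDIA drivers, mode-validation overrides are typically passed through the "ModeValidation" option:

    Section "Device"
        Identifier "Default Device"
        Option "NoLogo" "True"
        # Assumed placement; check the nvidia driver README for the exact token name
        Option "ModeValidation" "AllowDualLinkModes"
    EndSection

Restart X after saving for the change to take effect.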


  • Unable to update/install any files [closed]

    - by Surya
Possible Duplicate: "Problem with MergeList" error when trying to do an update

Just now I installed Ubuntu 12.04 on my Lenovo G570 laptop. First I got an error at the time of installation (don't know about it) and I restarted the system; the next time, it went well. So, after installing, problems started. There was an error with "Language recognition" and I tried to fix it, but that didn't work. I tried to install powertop to check the status of power management: sudo apt-get install powertop. This is the error I got:

    surya@surya-Lenovo-G570:~$ sudo apt-get powertop install
    [sudo] password for surya:
    E: Invalid operation powertop
    surya@surya-Lenovo-G570:~$ sudo apt-get install powertop
    Reading package lists... Error!
    E: Encountered a section with no Package: header
    E: Problem with MergeList /var/lib/apt/lists/extras.ubuntu.com_ubuntu_dists_precise_main_binary-i386_Packages
    E: The package lists or status file could not be parsed or opened.

I downloaded the Google Chrome .deb and tried to install it, but it's not working. Software Center opens but doesn't load. There was a notification on the status bar which says: "An error occurred, please run the package manager from the right-click menu ... E: Encountered a section with no Package: header E: Problem with MergeList /var/lib/apt/lists/extras.ubuntu.com_ubuntu_dists_precise_main_binary-i386_Packages"

"Copy & Paste" from the terminal is not really working. When I press Ctrl + C, it shows ^C on the terminal, but nothing is copied.

The most important error: I am unable to see a "chip" icon on the status bar to install proprietary drivers for my ATI card. The interesting part is, powertop worked well on the live CD and it even detected my ATI card.

Update: When I opened "Software Up to Date", it showed this error: Could not initialize the package information. An unresolvable problem occurred while initializing the package information. Please report this bug against the 'update-manager' package and include the following error message: 'E:Encountered a section with no Package: header, E:Problem with MergeList /var/lib/apt/lists/extras.ubuntu.com_ubuntu_dists_precise_main_binary-i386_Packages, E:The package lists or status file could not be parsed or opened.'

My laptop details: Lenovo G570; Intel 2nd Gen i5 processor; 4GB DDR3 RAM; Intel built-in graphics + AMD Radeon HD 6370M 1GB graphics. I need help ASAP.
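A commonly suggested fix for this MergeList error, offered here as a sketch rather than a guaranteed solution: delete the corrupted cached package lists and let apt rebuild them on the next update.

    # remove the cached package lists, then rebuild them
    sudo rm /var/lib/apt/lists/* -vf
    sudo apt-get update

After that, sudo apt-get install powertop should be able to read the package lists again.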


  • Using Stub Objects

    - by user9154181
Having told the long and winding tale of where stub objects came from and how we use them to build Solaris, I'd like to focus now on the nuts and bolts of building and using them. The following new features were added to the Solaris link-editor (ld) to support the production and use of stub objects:

-z stub
This new command line option informs ld that it is to build a stub object rather than a normal object. In this mode, it accepts the same command line arguments as usual, but will quietly ignore any objects and sharable object dependencies.

STUB_OBJECT Mapfile Directive
In order to build a stub version of an object, its mapfile must specify the STUB_OBJECT directive. When producing a non-stub object, the presence of STUB_OBJECT causes the link-editor to perform extra validation to ensure that the stub and non-stub objects will be compatible.

ASSERT Mapfile Directive
All data symbols exported from the object must have an ASSERT symbol directive in the mapfile that declares them as data and supplies the size, binding, bss attributes, and symbol aliasing details. When building the stub objects, the information in these ASSERT directives is used to create the data symbols. When building the real object, these ASSERT directives will ensure that the real object matches the linking interface presented by the stub. Although ASSERT was added to the link-editor in order to support stub objects, it is a general purpose feature that can be used independently of stub objects. For instance, you might choose to use an ASSERT directive if you have a symbol that must have a specific address in order for the object to operate properly and you want to automatically ensure that this will always be the case.

The material presented here is derived from a document I originally wrote during the development effort, which had the dual goals of providing supplemental materials for the stub object PSARC case, and serving as a set of edits that were eventually applied to the Oracle Solaris Linker and Libraries Manual (LLM). The Solaris 11 LLM contains this information in a more polished form.

Stub Objects
A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object, while containing no code or data. Stub objects cannot be used at runtime. However, an application can be built against a stub object, where the stub object provides the real object name to be used at runtime, and then use the real object at runtime.

When building a stub object, the link-editor ignores any object or library files specified on the command line, and these files need not exist in order to build a stub. Since the compilation step can be omitted, and because the link-editor has relatively little work to do, stub objects can be built very quickly.

Stub objects can be used to solve a variety of build problems:

Speed
Modern machines, using a version of make with the ability to parallelize operations, are capable of compiling and linking many objects simultaneously, and doing so offers significant speedups. However, it is typical that a given object will depend on other objects, and that there will be a core set of objects that nearly everything else depends on. It is necessary to impose an ordering that builds each object before any other object that requires it. This ordering creates bottlenecks that reduce the amount of parallelization that is possible and limits the overall speed at which the code can be built.
Complexity/Correctness
In a large body of code, there can be a large number of dependencies between the various objects. The makefiles or other build descriptions for these objects can become very complex and difficult to understand or maintain. The dependencies can change as the system evolves. This can cause a given set of makefiles to become slightly incorrect over time, leading to race conditions and mysterious rare build failures.

Dependency Cycles
It might be desirable to organize code as cooperating shared objects, each of which draws on the resources provided by the other. Such cycles cannot be supported in an environment where objects must be built before the objects that use them, even though the runtime linker is fully capable of loading and using such objects if they could be built.

Stub shared objects offer an alternative method for building code that sidesteps the above issues. Stub objects can be quickly built for all the shared objects produced by the build. Then, all the real shared objects and executables can be built in parallel, in any order, using the stub objects to stand in for the real objects at link-time. Afterwards, the executables and real shared objects are kept, and the stub shared objects are discarded.

Stub objects are built from a mapfile, which must satisfy the following requirements. The mapfile must specify the STUB_OBJECT directive. This directive informs the link-editor that the object can be built as a stub object, and as such causes the link-editor to perform validation and sanity checking intended to guarantee that an object and its stub will always provide identical linking interfaces. All function and data symbols that make up the external interface to the object must be explicitly listed in the mapfile. The mapfile must use symbol scope reduction ('*') to remove any symbols not explicitly listed from the external interface. All global data exported from the object must have an ASSERT symbol attribute in the mapfile to specify the symbol type, size, and bss attributes. In the case where there are multiple symbols that reference the same data, the ASSERT for one of these symbols must specify the TYPE and SIZE attributes, while the others must use the ALIAS attribute to reference this primary symbol.

Given such a mapfile, the stub and real versions of the shared object can be built using the same command line for each, adding the '-z stub' option to the link for the stub object, and omitting the option from the link for the real object.

To demonstrate these ideas, the following code implements a shared object named idx5, which exports data from a 5 element array of integers, with each element initialized to contain its zero-based array index. This data is available as a global array, via an alternative alias data symbol with weak binding, and via a functional interface.

    % cat idx5.c
    int _idx5[5] = { 0, 1, 2, 3, 4 };

    #pragma weak idx5 = _idx5

    int
    idx5_func(int index)
    {
            if ((index < 0) || (index > 4))
                    return (-1);
            return (_idx5[index]);
    }

A mapfile is required to describe the interface provided by this shared object.

    % cat mapfile
    $mapfile_version 2
    STUB_OBJECT;
    SYMBOL_SCOPE {
            _idx5 {
                    ASSERT { TYPE=data; SIZE=4[5] };
            };
            idx5 {
                    ASSERT { BINDING=weak; ALIAS=_idx5 };
            };
            idx5_func;
        local:
            *;
    };

The following main program is used to print all the index values available from the idx5 shared object.
    % cat main.c
    #include <stdio.h>

    extern int _idx5[5], idx5[5], idx5_func(int);

    int
    main(int argc, char **argv)
    {
            int i;

            for (i = 0; i < 5; i++)
                    printf("[%d] %d %d %d\n", i, _idx5[i], idx5[i], idx5_func(i));
            return (0);
    }

The following commands create a stub version of this shared object in a subdirectory named stublib. elfdump is used to verify that the resulting object is a stub. The command used to build the stub differs from that of the real object only in the addition of the -z stub option, and the use of a different output file name. This demonstrates the ease with which stub generation can be added to an existing makefile.

    % cc -Kpic -G -M mapfile -h libidx5.so.1 idx5.c -o stublib/libidx5.so.1 -zstub
    % ln -s libidx5.so.1 stublib/libidx5.so
    % elfdump -d stublib/libidx5.so | grep STUB
          [11]  FLAGS_1           0x4000000           [ STUB ]

The main program can now be built, using the stub object to stand in for the real shared object, and setting a runpath that will find the real object at runtime. However, as we have not yet built the real object, this program cannot yet be run. Attempts to cause the system to load the stub object are rejected, as the runtime linker knows that stub objects lack the actual code and data found in the real object, and cannot execute.

    % cc main.c -L stublib -R '$ORIGIN/lib' -lidx5 -lc
    % ./a.out
    ld.so.1: a.out: fatal: libidx5.so.1: open failed: No such file or directory
    Killed
    % LD_PRELOAD=stublib/libidx5.so.1 ./a.out
    ld.so.1: a.out: fatal: stublib/libidx5.so.1: stub shared object cannot be used at runtime
    Killed

We build the real object using the same command as we used to build the stub, omitting the -z stub option, and writing the results to a different file.

    % cc -Kpic -G -M mapfile -h libidx5.so.1 idx5.c -o lib/libidx5.so.1

Once the real object has been built in the lib subdirectory, the program can be run.

    % ./a.out
    [0] 0 0 0
    [1] 1 1 1
    [2] 2 2 2
    [3] 3 3 3
    [4] 4 4 4

Mapfile Changes
The version 2 mapfile syntax was extended in a number of places to accommodate stub objects.

Conditional Input
The version 2 mapfile syntax has the ability to conditionalize mapfile input using the $if control directive. As you might imagine, these directives are used frequently with ASSERT directives for data, because a given data symbol will frequently have a different size in 32 or 64-bit code, or on differing hardware such as x86 versus sparc. The link-editor maintains an internal table of names that can be used in the logical expressions evaluated by $if and $elif. At startup, this table is initialized with items that describe the class of object (_ELF32 or _ELF64) and the type of the target machine (_sparc or _x86). We found that there were a small number of cases in the Solaris code base in which we needed to know what kind of object we were producing, so we added the following new predefined items in order to address that need:

    Name        Meaning
    _ET_DYN     shared object
    _ET_EXEC    executable object
    _ET_REL     relocatable object

STUB_OBJECT Directive
The new STUB_OBJECT directive informs the link-editor that the object described by the mapfile can be built as a stub object.

    STUB_OBJECT;

A stub shared object is built entirely from the information in the mapfiles supplied on the command line. When the -z stub option is specified to build a stub object, the presence of the STUB_OBJECT directive in a mapfile is required, and the link-editor uses the information in symbol ASSERT attributes to create global symbols that match those of the real object.
When the real object is built, the presence of STUB_OBJECT causes the link-editor to verify that the mapfiles accurately describe the real object interface, and that a stub object built from them will provide the same linking interface as the real object it represents. All function and data symbols that make up the external interface to the object must be explicitly listed in the mapfile. The mapfile must use symbol scope reduction ('*') to remove any symbols not explicitly listed from the external interface. All global data in the object is required to have an ASSERT attribute that specifies the symbol type and size. If the ASSERT BIND attribute is not present, the link-editor provides a default assertion that the symbol must be GLOBAL. If the ASSERT SH_ATTR attribute is not present, or does not specify that the section is one of BITS or NOBITS, the link-editor provides a default assertion that the associated section is BITS. All data symbols that describe the same address and size are required to have ASSERT ALIAS attributes specified in the mapfile. If aliased symbols are discovered that do not have an ASSERT ALIAS specified, the link fails and no object is produced.

These rules ensure that the mapfiles contain a description of the real shared object's linking interface that is sufficient to produce a stub object with a completely compatible linking interface.

SYMBOL_SCOPE/SYMBOL_VERSION ASSERT Attribute
The SYMBOL_SCOPE and SYMBOL_VERSION mapfile directives were extended with a symbol attribute named ASSERT. The syntax for the ASSERT attribute is as follows:

    ASSERT {
            ALIAS = symbol_name;
            BINDING = symbol_binding;
            TYPE = symbol_type;
            SH_ATTR = section_attributes;
            SIZE = size_value;
            SIZE = size_value[count];
    };

The ASSERT attribute is used to specify the expected characteristics of the symbol. The link-editor compares the symbol characteristics that result from the link to those given by ASSERT attributes. If the real and asserted attributes do not agree, a fatal error is issued and the output object is not created.

In normal use, the link-editor evaluates ASSERT attributes when present, but does not require them or provide default values for them. The presence of the STUB_OBJECT directive in a mapfile alters the interpretation of ASSERT to require them under some circumstances, and to supply default assertions if explicit ones are not present. See the definition of the STUB_OBJECT Directive for the details. When the -z stub command line option is specified to build a stub object, the information provided by ASSERT attributes is used to define the attributes of the global symbols provided by the object.

ASSERT accepts the following:

ALIAS
Name of a previously defined symbol that this symbol is an alias for. An alias symbol has the same type, value, and size as the main symbol. The ALIAS attribute is mutually exclusive to the TYPE, SIZE, and SH_ATTR attributes, and cannot be used with them. When ALIAS is specified, the type, size, and section attributes are obtained from the alias symbol.

BIND
Specifies an ELF symbol binding, which can be any of the STB_ constants defined in <sys/elf.h>, with the STB_ prefix removed (e.g. GLOBAL, WEAK).

TYPE
Specifies an ELF symbol type, which can be any of the STT_ constants defined in <sys/elf.h>, with the STT_ prefix removed (e.g. OBJECT, COMMON, FUNC). In addition, for compatibility with other mapfile usage, FUNCTION and DATA can be specified, for STT_FUNC and STT_OBJECT, respectively.
TYPE is mutually exclusive to ALIAS, and cannot be used in conjunction with it.

SH_ATTR
Specifies attributes of the section associated with the symbol. The section_attributes that can be specified are given in the following table:

    Section Attribute   Meaning
    BITS                Section is not of type SHT_NOBITS
    NOBITS              Section is of type SHT_NOBITS

SH_ATTR is mutually exclusive to ALIAS, and cannot be used in conjunction with it.

SIZE
Specifies the expected symbol size. SIZE is mutually exclusive to ALIAS, and cannot be used in conjunction with it. The syntax for the size_value argument is as described in the discussion of the SIZE attribute below.

SIZE
The SIZE symbol attribute existed before support for stub objects was introduced. It is used to set the size attribute of a given symbol. This attribute results in the creation of a symbol definition. Prior to the introduction of the ASSERT SIZE attribute, the value of a SIZE attribute was always numeric. While attempting to apply ASSERT SIZE to the objects in the Solaris ON consolidation, I found that many data symbols have a size based on the natural machine wordsize for the class of object being produced. Variables declared as long, or as a pointer, will be 4 bytes in size in a 32-bit object, and 8 bytes in a 64-bit object. Initially, I employed the conditional $if directive to handle these cases as follows:

    $if _ELF32
    foo     { ASSERT { TYPE=data; SIZE=4 } };
    bar     { ASSERT { TYPE=data; SIZE=20 } };
    $elif _ELF64
    foo     { ASSERT { TYPE=data; SIZE=8 } };
    bar     { ASSERT { TYPE=data; SIZE=40 } };
    $else
    $error UNKNOWN ELFCLASS
    $endif

I found that the situation occurs frequently enough that this is cumbersome. To simplify this case, I introduced the idea of the addrsize symbolic name, and of a repeat count, which together make it simple to specify machine word scalar or array symbols. Both the SIZE and ASSERT SIZE attributes support this syntax:

The size_value argument can be a numeric value, or it can be the symbolic name addrsize. addrsize represents the size of a machine word capable of holding a memory address. The link-editor substitutes the value 4 for addrsize when building 32-bit objects, and the value 8 when building 64-bit objects. addrsize is useful for representing the size of pointer variables and C variables of type long, as it automatically adjusts for 32 and 64-bit objects without requiring the use of conditional input.

The size_value argument can be optionally suffixed with a count value, enclosed in square brackets. If count is present, size_value and count are multiplied together to obtain the final size value. Using this feature, the example above can be written more naturally as:

    foo     { ASSERT { TYPE=data; SIZE=addrsize } };
    bar     { ASSERT { TYPE=data; SIZE=addrsize[5] } };

Exported Global Data Is Still A Bad Idea
As you can see, the additional plumbing added to the Solaris link-editor to support stub objects is minimal. Furthermore, about 90% of that plumbing is dedicated to handling global data. We have long advised against global data exported from shared objects. There are many ways in which global data does not fit well with dynamic linking. Stub objects simply provide one more reason to avoid this practice. It is always better to export all data via a functional interface. You should always hide your data, and make it available to your users via a function that they can call to acquire the address of the data item.
However, if you do have to support global data for a stub, perhaps because you are working with an already existing object, it is still easily done, as shown above. Oracle does not like us to discuss hypothetical new features that don't exist in shipping product, so I'll end this section with a speculation. It might be possible to do more in this area to ease the difficulty of dealing with objects that have global data that the users of the library don't need. Perhaps someday...

Conclusions
It is easy to create stub objects for most objects. If your library only exports function symbols, all you have to do to build a faithful stub object is to add STUB_OBJECT; and then to use the same link command you're currently using, with the addition of the -z stub option. Happy Stubbing!


  • Dig returns "status: REFUSED" for external queries?

    - by Mikey
I can't seem to work out why my DNS isn't working properly. If I run dig from the nameserver it functions correctly:

    # dig ungl.org

    ; <<>> DiG 9.5.1-P2.1 <<>> ungl.org
    ;; global options: printcmd
    ;; Got answer:
    ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 24585
    ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 1

    ;; QUESTION SECTION:
    ;ungl.org.                      IN      A

    ;; ANSWER SECTION:
    ungl.org.               38400   IN      A       188.165.34.72

    ;; AUTHORITY SECTION:
    ungl.org.               38400   IN      NS      ns.kimsufi.com.
    ungl.org.               38400   IN      NS      r29901.ovh.net.

    ;; ADDITIONAL SECTION:
    ns.kimsufi.com.         85529   IN      A       213.186.33.199

    ;; Query time: 1 msec
    ;; SERVER: 127.0.0.1#53(127.0.0.1)
    ;; WHEN: Sat Mar 13 01:04:06 2010
    ;; MSG SIZE  rcvd: 114

but when I run it from another server in the same datacenter I receive:

    # dig @87.98.167.208 ungl.org

    ; <<>> DiG 9.5.1-P2.1 <<>> @87.98.167.208 ungl.org
    ; (1 server found)
    ;; global options: printcmd
    ;; Got answer:
    ;; ->>HEADER<<- opcode: QUERY, status: REFUSED, id: 18787
    ;; flags: qr rd; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0
    ;; WARNING: recursion requested but not available

    ;; QUESTION SECTION:
    ;ungl.org.                      IN      A

    ;; Query time: 1 msec
    ;; SERVER: 87.98.167.208#53(87.98.167.208)
    ;; WHEN: Sat Mar 13 01:01:35 2010
    ;; MSG SIZE  rcvd: 26

my zone file for this domain is:

    $ttl 38400
    ungl.org.   IN  SOA r29901.ovh.net. mikey.aol.com. (
                201003121
                10800
                3600
                604800
                38400 )
    ungl.org.   IN  NS  r29901.ovh.net.
    ungl.org.   IN  NS  ns.kimsufi.com.
    ungl.org.   IN  A   188.165.34.72
    localhost.  IN  A   127.0.0.1
    www         IN  A   188.165.34.72

and the named.conf.options is default:

    options {
        directory "/var/cache/bind";
        // If there is a firewall between you and nameservers you want
        // to talk to, you may need to fix the firewall to allow multiple
        // ports to talk. See http://www.kb.cert.org/vuls/id/800113
        // If your ISP provided one or more IP addresses for stable
        // nameservers, you probably want to use them as forwarders.
        // Uncomment the following block, and insert the addresses replacing
        // the all-0's placeholder.
        // forwarders {
        //     0.0.0.0;
        // };
        auth-nxdomain no;    # conform to RFC1035
        listen-on-v6 { ::1; };
        listen-on { 127.0.0.1; };
        allow-recursion { 127.0.0.1; };
    };

named.conf.local:

    // Do any local configuration here
    // Consider adding the 1918 zones here, if they are not used in your
    // organization
    // include "/etc/bind/zones.rfc1918";
    zone "eugl.eu" { type master; file "/etc/bind/eugl.eu"; notify no; };
    zone "ungl.org" { type master; file "/etc/bind/ungl.org"; notify no; };

The server is running Ubuntu 9.10 and BIND 9. If anyone can shed some light on this for me, it'd make me very happy! Thanks
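One detail that stands out in the posted configuration: listen-on and allow-recursion are both restricted to 127.0.0.1, which matches the symptom that queries work from the nameserver itself but not from elsewhere. As a sketch, and an assumption about the intended behaviour rather than a confirmed diagnosis, an options block that lets BIND answer authoritative queries externally while keeping recursion local might look like:

    options {
        directory "/var/cache/bind";
        auth-nxdomain no;    # conform to RFC1035
        listen-on-v6 { ::1; };
        listen-on { any; };              // answer on all interfaces, not just loopback
        allow-query { any; };            // anyone may query the authoritative zones
        allow-recursion { 127.0.0.1; };  // recursion stays restricted to localhost
    };

Restart BIND afterwards, e.g. sudo /etc/init.d/bind9 restart on Ubuntu 9.10.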


  • TableViewCell autorelease error

    - by iAm
OK, for two days now I have been trying to solve an error I have inside the cellForRowAtIndexPath method. Let me start by saying that I have tracked down the bug to this method. The error is [CFDictionary image] or [Not a Type image] message sent to deallocated instance. I know about the debug flags, NSZombie, MallocStack, and others; they helped me narrow it down to this method and why, but I do not know how to solve it besides a redesign of the app UI.

So what am I trying to do? This block of code displays a purchase detail, which contains items, and the items are in their own section. When in edit mode, a cell appears at the bottom of the items section with a label of "Add new Item"; this button presents a modal view of the add-item controller, an item is added, and the view returns to the purchase detail screen, with the just-added item in the section just above the "Add new Item" cell. The problem happens when I scroll the item section off screen and back into view: the app crashes with EXC_BAD_ACCESS. Even if I don't scroll and instead hit the back button on the navBar, I still get the same error.

    - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
        UITableViewCell *cell = nil;
        switch (indexPath.section) {
            case PURCHASE_SECTION: {
                static NSString *cellID = @"GenericCell";
                cell = [tableView dequeueReusableCellWithIdentifier:cellID];
                if (cell == nil) {
                    cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleValue2 reuseIdentifier:cellID] autorelease];
                    cell.accessoryType = UITableViewCellAccessoryDisclosureIndicator;
                }
                switch (indexPath.row) {
                    case CATEGORY_ROW:
                        cell.textLabel.text = @"Category:";
                        cell.detailTextLabel.text = [self.purchase.category valueForKey:@"name"];
                        cell.accessoryType = UITableViewCellAccessoryNone;
                        cell.editingAccessoryType = UITableViewCellAccessoryDisclosureIndicator;
                        break;
                    case TYPE_ROW:
                        cell.textLabel.text = @"Type:";
                        cell.detailTextLabel.text = [self.purchase.type valueForKey:@"name"];
                        cell.accessoryType = UITableViewCellAccessoryNone;
                        cell.editingAccessoryType = UITableViewCellAccessoryDisclosureIndicator;
                        break;
                    case VENDOR_ROW:
                        cell.textLabel.text = @"Payment:";
                        cell.detailTextLabel.text = [self.purchase.vendor valueForKey:@"name"];
                        cell.accessoryType = UITableViewCellAccessoryNone;
                        cell.editingAccessoryType = UITableViewCellAccessoryDisclosureIndicator;
                        break;
                    case NOTES_ROW:
                        cell.textLabel.text = @"Notes";
                        cell.editingAccessoryType = UITableViewCellAccessoryNone;
                        break;
                    default:
                        break;
                }
                break;
            }
            case ITEMS_SECTION: {
                NSUInteger itemsCount = [items count];
                if (indexPath.row < itemsCount) {
                    static NSString *itemsCellID = @"ItemsCell";
                    cell = [tableView dequeueReusableCellWithIdentifier:itemsCellID];
                    if (cell == nil) {
                        cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleSubtitle reuseIdentifier:itemsCellID] autorelease];
                        cell.accessoryType = UITableViewCellAccessoryNone;
                    }
                    singleItem = [self.items objectAtIndex:indexPath.row];
                    cell.textLabel.text = singleItem.name;
                    cell.detailTextLabel.text = [singleItem.amount formattedDataDisplay];
                    cell.imageView.image = [singleItem.image image];
                } else {
                    static NSString *AddItemCellID = @"AddItemCell";
                    cell = [tableView dequeueReusableCellWithIdentifier:AddItemCellID];
                    if (cell == nil) {
                        cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:AddItemCellID] autorelease];
                        cell.accessoryType = UITableViewCellAccessoryDisclosureIndicator;
                    }
                    cell.textLabel.text = @"Add Item";
                }
                break;
            }
            case LOCATION_SECTION: {
                static NSString *localID = @"LocationCell";
                cell = [tableView dequeueReusableCellWithIdentifier:localID];
                if (cell == nil) {
                    cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleSubtitle reuseIdentifier:localID] autorelease];
                    cell.accessoryType = UITableViewCellAccessoryNone;
                }
                cell.textLabel.text = @"Purchase Location";
                cell.accessoryType = UITableViewCellAccessoryDisclosureIndicator;
                cell.editingAccessoryType = UITableViewCellAccessoryNone;
                break;
            }
            default:
                break;
        }
        return cell;
    }

The singleItem is of model type PurchaseItem for Core Data. Now that I know what is causing the error, how do I solve it? I have tried everything that I know, and some of what I don't know, but still no progress. Solving this without a redesign is my goal; perhaps there is an error I am making that I cannot see, but if it's the nature of autorelease, then I will redesign.
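One pattern that fits this crash: singleItem appears to be an instance variable assigned without being retained, so anything still pointing at the old item or its image becomes a dangling reference once the objects are released; scrolling the section back into view or popping the controller then touches the dead object. A sketch of the usual fix, assuming singleItem is not needed outside this method (PurchaseItem is the model class named in the question):

    // Use a plain local variable instead of a bare ivar assignment;
    // the object stays alive for the duration of this method via self.items
    PurchaseItem *item = [self.items objectAtIndex:indexPath.row];
    cell.textLabel.text = item.name;
    cell.detailTextLabel.text = [item.amount formattedDataDisplay];
    cell.imageView.image = [item.image image];

If other methods do need the selected item, declare a retaining property and assign through it (self.singleItem = ...), releasing it in dealloc.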


  • Help Us Pick the First T-Shirt Design to Promote How-To Geek

    - by Eric Z Goodnight
Here’s your first peek at How-To Geek’s new line of merchandise! Tell us what you think, what you like, and what you actually would like to own in this first round of potential HTG tee shirts. If you like the content here at HTG, we’d love to hear back from you; we made these for you, the readers, and we hope to hear what suits you and if you like what we’ve got to offer. Check them out, tell us your thoughts in the comments section (or send us emails at [email protected]) and fill out our poll in the section below. The design that you like the most will be professionally printed and sold right here at howtogeek.com. Interested in buying one of them right now? Jump to the bottom to be notified when they’re ready to be bought and shipped right to you.


  • Implement blogging system as a part of a web application/website

    - by Rana
I am working on developing a website where registered members will be able to write blogs/tutorials, and each registered site user will automatically be a registered blogger. Site news/announcements will be posted through it too. Now, would it be a wise idea to use WordPress for this purpose, or should I develop a custom blogging system myself? Also, is there any impact if I use a subdomain for the blog section versus using the same domain (a subdirectory URL) for the blog section? Looking forward to hearing your feedback/comments/suggestions. Thanks.


  • JQGrid PDF Export

    - by thanigai
Originally posted on: http://geekswithblogs.net/thanigai/archive/2013/06/17/jqgrdi-pdf-export.aspx

JQGrid PDF Export

The aim of this article is to address PDF export from client-side grid frameworks. The solution is built using ASP.Net MVC 4 and Visual Studio 2012. The article assumes the developer has a fair amount of knowledge of ASP.Net MVC and C#.

Tools Used: Visual Studio 2012, ASP.Net MVC 4, NuGet Package Manager.

JQGrid is one of the client grid frameworks built on top of the jQuery framework. It helps in building a beautiful grid with paging, sorting and editing options. There are also other features available as extension plugins, and developers can write their own if needed. You can download JQGrid from the JQGrid homepage or as a NuGet package. I have given below the command to download JQGrid through the Package Manager Console. From the Tools menu select "Library Package Manager" and then select "Package Manager Console". This command will pull down the latest JQGrid package and add it to the Scripts folder.

Once the script is downloaded and referenced in the project, update the bundle config to add the script reference in the pages. BundleConfig can be found in the App_Start folder in the project structure.

    bundles.Add(new StyleBundle("~/Content/jqgrid").Include("~/Content/ui.jqgrid.css"));
    bundles.Add(new ScriptBundle("~/bundles/jquerygrid").Include("~/Scripts/jqGrid/jquery.jqGrid*"));

Once the configs are added, refer the bundles in Views/Shared/LayoutPage.cshtml. Add the following line to the head section of the page:

    @Styles.Render("~/Content/jqgrid")

Add the following lines to the end of the page, before the html close tags:

    @Scripts.Render("~/bundles/jquery")
    @Scripts.Render("~/bundles/jqueryui")
    @Scripts.Render("~/bundles/jquerygrid")

That's all to be done from the view perspective. Once these steps are done, the developer can start coding for the JQGrid. In this example we will modify the HomeController for the demo. The Index action will be the default action. We will add an argument to this Index action; let it be a nullable bool, just to mark the PDF request.

In Index.cshtml we will add a table tag with an id "gridTable". We will use this table for making the grid. Since JQGrid is an extension of jQuery, we initialize the grid settings in the script section of the page. This script section is placed at the end of the page to improve performance, just below the bundle references for jQuery and jQueryUI; this is one of the improvement factors from "why slow" provided by Yahoo.

    <table id="gridTable" class="scroll"></table>
    <input type="button" value="Export PDF" onclick="exportPDF();" />

    @section scripts {
    <script type="text/javascript">
        $(document).ready(function () {
            $("#gridTable").jqGrid({
                datatype: "json",
                url: '@Url.Action("GetCustomerDetails")',
                mtype: 'GET',
                colNames: ["CustomerID", "CustomerName", "Location", "PrimaryBusiness"],
                colModel: [
                    { name: "CustomerID", width: 40, index: "CustomerID", align: "center" },
                    { name: "CustomerName", width: 40, index: "CustomerName", align: "center" },
                    { name: "Location", width: 40, index: "Location", align: "center" },
                    { name: "PrimaryBusiness", width: 40, index: "PrimaryBusiness", align: "center" }
                ],
                height: 250,
                autowidth: true,
                sortorder: "asc",
                rowNum: 10,
                rowList: [5, 10, 15, 20],
                sortname: "CustomerID",
                viewrecords: true
            });
        });

        function exportPDF() {
            document.location = '@Url.Action("Index")?pdf=true';
        }
    </script>
    }

The exportPDF method just sets the document location to the Index action method with the pdf Boolean set to true, to mark it as a PDF download request. An in-memory list collection is used for demo purposes. The GetCustomerDetails method is the server-side action method that will provide the data as a JSON list. We will see the method explanation below.

    [HttpGet]
    public JsonResult GetCustomerDetails()
    {
        var result = new
        {
            total = 1,
            page = 1,
            records = customerList.Count(),
            rows = (customerList.Select(e => new
            {
                id = e.CustomerID,
                cell = new string[] {
                    e.CustomerID.ToString(),
                    e.CustomerName,
                    e.Location,
                    e.PrimaryBusiness
                }
            })).ToArray()
        };
        return Json(result, JsonRequestBehavior.AllowGet);
    }

JQGrid expects the response data from the server in a certain format. The server method shown above takes care of formatting the response so that JQGrid understands the data properly. The response data should contain total pages, current page, full record count, and rows of data with an id and the remaining columns as a string array. The response is built using an anonymous object and sent as an MVC JsonResult. Since we are using HttpGet, it's better to mark the attribute as HttpGet and also set the JSON request behavior to AllowGet.

The in-memory list is initialized in the HomeController constructor for reference.

    public class HomeController : Controller
    {
        private readonly IList<CustomerViewModel> customerList;

        public HomeController()
        {
            customerList = new List<CustomerViewModel>()
            {
                new CustomerViewModel { CustomerID = 100, CustomerName = "Sundar", Location = "Chennai", PrimaryBusiness = "Teaching" },
                new CustomerViewModel { CustomerID = 101, CustomerName = "Sudhagar", Location = "Chennai", PrimaryBusiness = "Software" },
                new CustomerViewModel { CustomerID = 102, CustomerName = "Thivagar", Location = "China", PrimaryBusiness = "SAP" },
            };
        }

        public ActionResult Index(bool? pdf)
        {
            if (!pdf.HasValue)
            {
                return View(customerList);
            }
            else
            {
                string filePath = Server.MapPath("Content") + "Sample.pdf";
                ExportPDF(customerList, new string[] { "CustomerID", "CustomerName", "Location", "PrimaryBusiness" }, filePath);
                return File(filePath, "application/pdf", "list.pdf");
            }
        }
    }

The Index action method has a Boolean argument named "pdf", used to indicate a PDF download. When the application starts, this method is first hit for the initial page request. For the PDF operation, a filename is generated and then sent to the ExportPDF method, which takes care of generating the PDF from the datasource. The ExportPDF method is listed below.

    private static void ExportPDF<TSource>(IList<TSource> customerList, string[] columns, string filePath)
    {
        Font headerFont = FontFactory.GetFont("Verdana", 10, Color.WHITE);
        Font rowfont = FontFactory.GetFont("Verdana", 10, Color.BLUE);
        Document document = new Document(PageSize.A4);
        PdfWriter writer = PdfWriter.GetInstance(document, new FileStream(filePath, FileMode.OpenOrCreate));
        document.Open();
        PdfPTable table = new PdfPTable(columns.Length);
        foreach (var column in columns)
        {
            PdfPCell cell = new PdfPCell(new Phrase(column, headerFont));
            cell.BackgroundColor = Color.BLACK;
            table.AddCell(cell);
        }
        foreach (var item in customerList)
        {
            foreach (var column in columns)
            {
                string value = item.GetType().GetProperty(column).GetValue(item).ToString();
                PdfPCell cell5 = new PdfPCell(new Phrase(value, rowfont));
                table.AddCell(cell5);
            }
        }
        document.Add(table);
        document.Close();
    }

iTextSharp is one of the pioneers in PDF export. It's an open-source library readily available as a NuGet package. This command will pull down the latest available library; I am using version 4.1.2.0 (the latest version may have changed). There are three main things in this library:

Document: the document class, which takes care of creating the document sheet with a particular size. We have used A4 size; there is also an option to define a rectangle size. This document instance will be further used in the next methods for reference.

PdfWriter: PdfWriter takes the filename and the document as references. This class enables the document class to generate the PDF content and save it to a file.

Font: using the Font class, the developer can control the font features. Since I need a nice looking font, I am using Verdana.

Following this, PdfPTable and PdfPCell are used for generating the normal table layout. We created two fonts, for the header and the rows:

    Font headerFont = FontFactory.GetFont("Verdana", 10, Color.WHITE);
    Font rowfont = FontFactory.GetFont("Verdana", 10, Color.BLUE);

We receive the header columns as a string array. The columns array is looped over and the header is generated, using headerFont for this purpose:

    PdfWriter writer = PdfWriter.GetInstance(document, new FileStream(filePath, FileMode.OpenOrCreate));
    document.Open();
    PdfPTable table = new PdfPTable(columns.Length);
    foreach (var column in columns)
    {
        PdfPCell cell = new PdfPCell(new Phrase(column, headerFont));
        cell.BackgroundColor = Color.BLACK;
        table.AddCell(cell);
    }

Then reflection is used to generate the row-wise details and form the grid:

    foreach (var item in customerList)
    {
        foreach (var column in columns)
        {
            string value = item.GetType().GetProperty(column).GetValue(item).ToString();
            PdfPCell cell5 = new PdfPCell(new Phrase(value, rowfont));
            table.AddCell(cell5);
        }
    }
    document.Add(table);
    document.Close();

Once the process is done, the PDF table is added to the document and the document is closed to write all the changes to the file path given. Then control moves back to the controller, which takes care of sending the response as a file result with a filename. If the file name is not given, the PDF will open in the same page; otherwise a popup will open asking whether to save or open the file.

    return File(filePath, "application/pdf", "list.pdf");

The final result screen shows the PDF file opened with the grid output.

Conclusion: This is how PDF export is done for JQGrid. The problem area addressed here is that client-side grid frameworks don't support PDF export on their own. In that case, it's better to have fine-grained control over the data and the generated PDF. iTextSharp has helped us to achieve our goal.


  • Ubuntu Software Center will not load and cannot be removed

    - by Drew Z.
My Ubuntu Software Center has stopped working and I have been trying to uninstall/re-install it. This is the error that I repeatedly keep receiving:

    drew@drew-Aspire-5750:~$ sudo apt-get remove software-center
    [sudo] password for drew:
    Reading package lists... Error!
    E: Encountered a section with no Package: header
    E: Problem with MergeList /var/lib/apt/lists/dl.google.com_linux_chrome_deb_dists_stable_main_binary-i386_Packages
    E: The package lists or status file could not be parsed or opened.
    drew@drew-Aspire-5750:~$ sudo apt-get autoremove software-center
    Reading package lists... Error!
    E: Encountered a section with no Package: header
    E: Problem with MergeList /var/lib/apt/lists/dl.google.com_linux_chrome_deb_dists_stable_main_binary-i386_Packages
    E: The package lists or status file could not be parsed or opened.
    drew@drew-Aspire-5750:~$

Any help would be gratefully appreciated.
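The same MergeList fix sketched earlier on this page applies here, with the corrupted file being the cached Google Chrome package list; as a hedged suggestion:

    # remove the corrupted cached list, then rebuild the package lists
    sudo rm /var/lib/apt/lists/dl.google.com_linux_chrome_deb_dists_stable_main_binary-i386_Packages
    sudo apt-get update

apt-get should then be able to parse the package lists again.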


  • ISACA Webcast follow up: Managing High Risk Access and Compliance with a Platform Approach to Privileged Account Management

    - by Darin Pendergraft
Last week we presented how Oracle Privileged Account Manager (OPAM) could be used to manage high risk, privileged accounts. If you missed the webcast, here is a link to the replay: ISACA replay archive (NOTE: you will need to use Internet Explorer to view the archive). For those of you that did join us on the call, you will know that I only had a little bit of time for Q&A, and was only able to answer a few of the questions that came in. So I wanted to devote this blog to answering the outstanding questions. Here they are.

1. Can OPAM track admin or DBA activity details during a password check-out session?
Oracle Audit Vault is monitoring these activities, which can be correlated to check-out events.

2. How would OPAM handle simultaneous requests?
OPAM can be configured to allow for shared passwords. By default, sharing is turned off.

3. How long are the passwords valid? Are the admins required to manually check them in?
Password expiration can be configured and set in the password policy according to your corporate standards. You can specify whether you want forced check-in or not.

4. Can 2-factor authentication be used with OPAM?
Yes. 2-factor integration with OPAM is provided by integration with Oracle Access Manager and Oracle Adaptive Access Manager.

5. How do you control access to OPAM to ensure that OPAM admins don't override the functionality to access privileged accounts?
OPAM provides separation of duties by using Admin Roles to manage access to targets and privileged accounts and to control which operations admins can perform.

6. How and where are the passwords stored in OPAM?
OPAM uses the Oracle Platform Security Services (OPSS) Credential Store Framework (CSF) to securely store passwords. This is the same system used by Oracle Applications.

7. Does OPAM support hierarchical/level based privileges? Is the log maintained for independent review/audit?
Yes. OPAM uses the Fusion Middleware (FMW) Audit Framework to store all OPAM related events in a dedicated audit database.

8. Does OPAM support emergency access in the case where approvers are not available until later?
Yes. OPAM can be configured to release a password under a "break-glass" emergency scenario.

9. Does OPAM work with AIX?
Yes. Supported UNIX versions are listed in the "certified component section" of the UNIX connector guide at: http://docs.oracle.com/cd/E22999_01/doc.111/e17694/intro.htm#autoId0

10. Does OPAM integrate with Sun Identity Manager?
Yes. OPAM can be integrated with SIM using the REST APIs. OPAM has direct integration with Oracle Identity Manager 11gR2.

11. Is OPAM available today and what does it cost?
Yes. OPAM is available now. Ask your Oracle Account Manager for pricing.

12. Can OPAM be used in SAP environments?
Yes. Supported SAP versions are listed in the "certified component section" of the SAP connector guide here: http://docs.oracle.com/cd/E22999_01/doc.111/e25327/intro.htm#autoId0

13. How would this product integrate, if at all, with access to a particular field in the DB that needs additional security, such as SSNs?
OPAM can work with DB Vault and DB Firewall to provide fine grained access control for databases.

14. Is VM supported?
As a deployment platform, Oracle VM is supported. For further details about supported virtualization technologies see Oracle Fusion Middleware Supported System Configurations here: http://www.oracle.com/technetwork/middleware/ias/downloads/fusion-certification-100350.html

15. Where did this (OPAM) technology come from?
OPAM was built by Oracle Engineering.
16. Are all Linux flavors supported? How about BSD?
BSD is not supported. For supported UNIX versions see the "certified component section" of the UNIX connector guide: http://docs.oracle.com/cd/E22999_01/doc.111/e17694/intro.htm#autoId0

17. What happens if users don't check passwords in at the end of a work task?
In OPAM, a time frame can be defined for how long a password can be checked out. The security admin can force a check-in at any given time.

18. Is MySQL supported?
Yes. Supported DB versions are listed in the "certified component section" of the DB connector guide here: http://docs.oracle.com/cd/E22999_01/doc.111/e28315/intro.htm#BABGJJHA

19. What happens when OPAM crashes and you need to use the password?
OPAM can be configured for high availability, but if required, OPAM data can be backed up/recovered. See the OPAM admin guide.

20. Is OPAM a standalone product or does it leverage other components from IDM?
OPAM can be run stand-alone, but will also leverage other IDM components.


  • Apache URL Rewrite

    - by sgtbeano
I'm trying and failing to get a URL rewrite working. Firstly, I'm doing it in the vhost declaration; is that right? What I'm trying to do is take any URL which has view.php?id=[a 1-or-more-digit number] and rewrite it to view.php?id=[number]&section=1. Any help would be greatly appreciated, thanks for looking.

Okay, so I tried the suggestion below (thanks for that) and now have this in my vhost file, but still no effect:

    NameVirtualHost *:80
    <VirtualHost *:80>
        ServerAdmin ########
        DocumentRoot "########"
        ServerName ########
        ErrorLog "logs\########.log"
        <Directory "########">
            DirectoryIndex index.php index.html
            AcceptPathInfo on
            Order allow,deny
            Allow from All
        </Directory>
        <Location />
            RewriteEngine on
            RewriteRule ^/view.php?id=([0-9]*)$ /view.php?id=$1&section=1 [R]
        </Location>
    </VirtualHost>

Any more suggestions? Thanks again
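A likely reason the rule never matches: RewriteRule only ever sees the URL path, never the query string, so a pattern containing ?id=... cannot match. A sketch of the usual approach, matching the query string with RewriteCond instead (this assumes mod_rewrite is loaded and the rule stays in the vhost):

    RewriteEngine on
    # The query string has to be matched separately from the path
    RewriteCond %{QUERY_STRING} ^id=([0-9]+)$
    RewriteRule ^/?view\.php$ /view.php?id=%1&section=1 [R,L]

The %1 back-reference picks up the group captured by the RewriteCond. Because the condition is anchored to the entire query string, the redirected URL (which now carries &section=1) no longer matches, so the rule does not loop.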


  • Oracle Enterprise Manager 12c Configuration Best Practices (Part 3 of 3)

    - by Bethany Lapaglia
    <span id="XinhaEditingPostion"></span>&amp;lt;span id=&amp;quot;XinhaEditingPostion&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;lt;span id=&amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;quot;XinhaEditingPostion&amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;quot;&amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;gt;&amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;lt;/span&amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;gt; This is part 3 of a three-part blog series that summarizes the most commonly implemented configuration changes to improve performance and operation of a large Enterprise Manager 12c environment. A “large” environment is categorized by the number of agents, targets and users. See the Oracle Enterprise Manager Cloud Control Advanced Installation and Configuration Guide chapter on Sizing for more details on sizing your environment properly. Part 1 of this series covered recommended configuration changes for the OMS and Repository Part 2 covered recommended changes for the Weblogic server Part 3 covers general configuration recommendations and a few known issues The entire series can be found in the My Oracle Support note titled Oracle Enterprise Manager 12c Configuration Best Practices [1553342.1]. Configuration Recommendations Configure E-Mail Notifications for EM related Alerts In some environments, the notifications for events for different target types may be sent to different support teams (i.e. notifications on host targets may be sent to a platform support team). However, the EM application administrators should be well informed of any alerts or problems seen on the EM infrastructure components. Recommendation: Create a new Incident rule for monitoring all EM components and setup the notifications to be sent to the EM administrator(s). The notification methods available can create or update an incident, send an email or forward to an event connector. To setup the incident rule set follow the steps below. Note that each individual rule in the rule set can have different actions configured. 1.  To create an incident rule for monitoring the EM components, click on Setup / Incidents / Incident Rules. On the All Enterprise Rules page, click on the out-of-box rule called “Incident management Ruleset for all targets” and then click on the Actions drop down list and select “Create Like Rule Set…” 2. For the rule set name, enter a name such as MTM Ruleset. Under the Targets tab, select “All targets of types” and select “OMS and Repository” from the drop down list. This target type contains all of the key EM components (OMS servers, repository, domains, etc.) 3. Click on the Rules tab. To edit a rule, click on the rule name and click on Edit as seen below 4. Modify the following rules: a. Incident creation Rule for metric alerts i. Leave the Type set as is but change the Severity to add Warning by clicking on the drop down list and selecting “Warning”. Click Next. ii.  Add or modify the actions as required (i.e. add email notifications). Click Continue and then click Next. iii. Leave the Name and description the same and click Next. iv. Click Continue on the Review page. b. Incident creation Rule for target unreachable. i.   Leave the Type set as is but change the Target type to add OMS and Repository by clicking on the drop down list selecting “OMS and Repository”. Click Next. ii.  
Add or modify the actions as required (i.e. add email notifications) Click Continue and then click Next. iii. Leave the Name and description the same and click Next. iv. Click Continue on the Review page. 5.  Modify the actions for any other rule as required and be sure to click the “Save” push button to save the rule set or all changes will be lost. Configure Out-of-Band Notifications for EM Agent Out-of-Band notifications act as a backup when there’s a complete EM outage or a repository database issue. This is configured on the agent of the OMS server and can be used to send emails or execute another script that would create a trouble ticket. It will send notifications about the following issues: • Repository Database down • All OMS are down • Repository side collection job that is broken or has an invalid schedule • Notification job that is broken or has an invalid schedule Recommendation: To setup Out-of-Band Notifications, refer to the MOS note “How To Setup Out Of Bound Email Notification In 12c” (Doc ID 1472854.1) Modify the Performance Test for the EM Console Service The EM Console Service has an out-of-box defined performance test that will be run to determine the status of this service. The test issues a request via an HTTP method to a specific URL. By default, the HTTP method used for this test is a GET but for performance reasons, should be changed to HEAD. The URL used for this request is set to point to a specific OMS server by default. If a multi-OMS system has been implemented and the OMS servers are behind a load balancer, then the URL in this section must be modified to point to the load balancer name instead of a specific server name. If this is not done and a portion of the infrastructure is down then the EM Console Service will show down as this test will fail. Recommendation: Modify the HTTP Method for the EM Console Service test and the URL if required following the detailed steps below. 1.  To create an incident rule for monitoring the EM components, click on Targets / Services. From the list of services, click on the EM Console Service. 2. On the EM Console Service page, click on the Test Performance tab. 3.  At the bottom of the page, click on the Web Transaction test called EM Console Service Test 4.  Click on the Service Tests and Beacons breadcrumb near the top of the page. 5.  Under the Service Tests section, make sure the EM Console Service Test is selected and click on the Edit push button. 6.  Under the Transaction section, make sure the Access Logout page transaction is selected and click on the Edit push button 7) Under the Request section, change the HTTP Method from the default of GET to the recommended value of HEAD. The URL in this section must be modified to point to the load balancer name instead of a specific server name if multi-OMSes have been implemented. Check for Known Issues Job Purge Repository Job is Shown as Down This issue is caused after upgrading EM from 12c to 12cR2. On the Repository page under Setup ? Manage Cloud Control ? Repository, the job called “Job Purge” is shown as down and the Next Scheduled Run is blank. Also, repvfy reports that this is a missing DBMS_SCHEDULER job. Recommendation: In EM 12cR2, the apply_purge_policies have been moved from the MGMT_JOB_ENGINE package to the EM_JOB_PURGE package. 
To remove this error, execute the commands below: $ repvfy verify core -test 2 -fix To confirm that the issue resolved, execute $ repvfy verify core -test 2 It can also be verified by refreshing the Job Service page in EM and check the status of the job, it should now be Up. Configure the Listener Targets in EM with the Listener Password (where required) EM will report this error every time it is encountered in the listener log file. In a RAC environment, typically the grid home and rdbms homes are owned by different OS users. The listener always runs from the grid home. Only the listener process owner can query or change the listener properties. The listener uses a password to allow other OS users (ex. the agent user) to query the listener process for parameters. EM has a default listener target metric that will query these properties. If the agent is not permitted to do this, the TNS incident (TNS-1190) will be logged in the listener’s log file. This means that the listener targets in EM also need to have this password set. Not doing so will cause many TNS incidents (TNS-1190). Below is a sample of this error from the listener log file: Recommendation: Set a listener password and include it in the configuration of the listener targets in EM For steps on setting the listener passwords, see MOS notes: 260986.1 , 427422.1

    Read the article

  • The concept of virtual host and DNS

    - by Subhransu
    I have a dedicated server and a domain, mydomain.com (bought from a hosting company). I want to host a website from my dedicated server with the domain mydomain.com, i.e. when I enter mydomain.com in the browser it should point to the IP of the dedicated server (let's say X.X.X.X) and a particular folder inside it. I have the following queries:

    On the server: I know I need to edit some files (like the hosts or hostname file), but I do not know exactly which files I need to edit. How do I add a site to sites-available and enable it in Apache 2?

    In the hosting company's control panel: which records should I add (A, CNAME, or any other)? Where should I add the DNS records (in the dedicated server section or the domain name section)? How is this going to affect the behaviour of the domain?

    In short, the question is: how does a virtual host work, and how do I add the DNS records?
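    A minimal sketch of the Apache side (assuming Apache 2 on Debian/Ubuntu; the file name and web root below are illustrative):

    ```
    # /etc/apache2/sites-available/mydomain.com.conf
    <VirtualHost *:80>
        ServerName mydomain.com
        ServerAlias www.mydomain.com
        # The "particular folder" the domain should serve from
        DocumentRoot /var/www/mydomain
    </VirtualHost>
    ```

    Enable the site and reload Apache:

    ```
    $ sudo a2ensite mydomain.com.conf
    $ sudo service apache2 reload
    ```

    On the DNS side, the records are added in the hosting company's zone for the domain (not on the dedicated server): an A record pointing mydomain.com to X.X.X.X, plus either a second A record for www or a CNAME for www pointing at the bare domain.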

    Read the article

  • Five C# Code Snippets

    A snippet is a small section of text or source code that can be inserted into the code of a program. Snippets provide an easy way to implement commonly used code or functions into a larger section of code. Instead of rewriting the same code over and over again, a programmer can save the code [...] Did you know that DotNetSlackers also publishes .net articles written by top known .net Authors? We already have over 80 articles in several categories including Silverlight. Take a look: here.

    Read the article

  • ~/.bashrc return can only 'return' from a function or sourced script

    - by Timothy
    I am trying to set up an OpenStack box to have a look at OpenStack Object Storage (Swift). Looking through the web I found this link: http://swift.openstack.org/development_saio.html#loopback-section I followed the instructions line by line but got stuck on step 7 in the "Getting the code and setting up test environment" section. When I execute ~/.bashrc I get: line 6: return: can only 'return' from a function or sourced script. Below is the line 6 extract from ~/.bashrc. My first reaction is to comment this line out, but I don't know what it does. Can anyone help?

    ```
    # If not running interactively, don't do anything
    [ -z "$PS1" ] && return
    ```

    I'm running Ubuntu 12.04 as a VM on Hyper-V, if knowing that is useful.
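    For what it's worth, the line itself is standard Ubuntu boilerplate: it stops the rest of ~/.bashrc from running in non-interactive shells ($PS1 is only set for interactive ones), so it should not be commented out. The error appears only because the file is being executed rather than sourced; return is legal in a sourced script but not in a directly executed one:

    ```
    $ bash ~/.bashrc      # executed in a child shell: "return: can only `return' from a function or sourced script"
    $ source ~/.bashrc    # sourced into the current shell: no error
    $ . ~/.bashrc         # POSIX spelling of the same thing
    ```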

    Read the article

  • Google Webmaster Tools Index Status is 0 but sitemap URL shows indexed

    - by DD.
    I've added my site, www.medexpress.co.uk, to Google Webmaster Tools. The site was submitted a few weeks ago. The Index Status shows 0, but the Sitemaps section shows that 6 URLs have been indexed. If I search in Google I can see that the site is indexed and several pages appear: https://www.google.co.uk/search?q=site%3Awww.medexpress.co.uk&oq=site%3Awww.medexpress.co.uk&sourceid=chrome&ie=UTF-8 My question is: why is the Index Status 0 when the Sitemaps section shows several indexed pages and the pages also appear in the search engine?
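    One shell-level sanity check (the sitemap path is an assumption; use whatever was actually submitted in Webmaster Tools) is to count roughly how many URLs the sitemap exposes:

    ```
    $ curl -s https://www.medexpress.co.uk/sitemap.xml | grep -c "<loc>"
    ```

    Beyond that, the Index Status report is generally updated on a different (and slower) schedule than the Sitemaps report, so a 0 there a few weeks after submission does not necessarily contradict the 6 indexed URLs.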

    Read the article

  • NSFetchedResultsController: using of NSManagedObjectContext during update brings to crash

    - by Kentzo
    Here is the interface of my controller class:

    ```
    @interface ProjectListViewController : UITableViewController <NSFetchedResultsControllerDelegate>
    {
        NSFetchedResultsController *fetchedResultsController;
        NSManagedObjectContext *managedObjectContext;
    }
    @end
    ```

    I use the following code to init fetchedResultsController:

    ```
    if (fetchedResultsController != nil) {
        return fetchedResultsController;
    }

    // Create and configure a fetch request with the Project entity.
    NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init];
    NSEntityDescription *entity = [NSEntityDescription entityForName:@"Project"
                                              inManagedObjectContext:managedObjectContext];
    [fetchRequest setEntity:entity];

    // Create the sort descriptors array.
    NSSortDescriptor *projectIdDescriptor = [[NSSortDescriptor alloc] initWithKey:@"projectId"
                                                                        ascending:YES];
    NSArray *sortDescriptors = [[NSArray alloc] initWithObjects:projectIdDescriptor, nil];
    [fetchRequest setSortDescriptors:sortDescriptors];

    // Create and initialize the fetch results controller.
    NSFetchedResultsController *aFetchedResultsController =
        [[NSFetchedResultsController alloc] initWithFetchRequest:fetchRequest
                                            managedObjectContext:managedObjectContext
                                              sectionNameKeyPath:nil
                                                       cacheName:nil];
    self.fetchedResultsController = aFetchedResultsController;
    fetchedResultsController.delegate = self;
    ```

    As you can see, I am using the same managedObjectContext as defined in my controller class. Here is an adoption of the NSFetchedResultsControllerDelegate protocol:

    ```
    - (void)controllerWillChangeContent:(NSFetchedResultsController *)controller
    {
        // The fetch controller is about to start sending change notifications,
        // so prepare the table view for updates.
        [self.tableView beginUpdates];
    }

    - (void)controller:(NSFetchedResultsController *)controller
       didChangeObject:(id)anObject
           atIndexPath:(NSIndexPath *)indexPath
         forChangeType:(NSFetchedResultsChangeType)type
          newIndexPath:(NSIndexPath *)newIndexPath
    {
        UITableView *tableView = self.tableView;

        switch(type) {
            case NSFetchedResultsChangeInsert:
                [tableView insertRowsAtIndexPaths:[NSArray arrayWithObject:newIndexPath]
                                 withRowAnimation:UITableViewRowAnimationFade];
                break;

            case NSFetchedResultsChangeDelete:
                [tableView deleteRowsAtIndexPaths:[NSArray arrayWithObject:indexPath]
                                 withRowAnimation:UITableViewRowAnimationFade];
                break;

            case NSFetchedResultsChangeUpdate:
                [self _configureCell:(TDBadgedCell *)[tableView cellForRowAtIndexPath:indexPath]
                         atIndexPath:indexPath];
                break;

            case NSFetchedResultsChangeMove:
                if (newIndexPath != nil) {
                    [tableView deleteRowsAtIndexPaths:[NSArray arrayWithObject:indexPath]
                                     withRowAnimation:UITableViewRowAnimationFade];
                    [tableView insertRowsAtIndexPaths:[NSArray arrayWithObject:newIndexPath]
                                     withRowAnimation:UITableViewRowAnimationFade];
                } else {
                    [tableView reloadSections:[NSIndexSet indexSetWithIndex:indexPath.section]
                             withRowAnimation:UITableViewRowAnimationFade];
                }
                break;
        }
    }

    - (void)controller:(NSFetchedResultsController *)controller
      didChangeSection:(id <NSFetchedResultsSectionInfo>)sectionInfo
               atIndex:(NSUInteger)sectionIndex
         forChangeType:(NSFetchedResultsChangeType)type
    {
        switch(type) {
            case NSFetchedResultsChangeInsert:
                [self.tableView insertSections:[NSIndexSet indexSetWithIndex:sectionIndex]
                              withRowAnimation:UITableViewRowAnimationFade];
                break;

            case NSFetchedResultsChangeDelete:
                [self.tableView deleteSections:[NSIndexSet indexSetWithIndex:sectionIndex]
                              withRowAnimation:UITableViewRowAnimationFade];
                break;
        }
    }

    - (void)controllerDidChangeContent:(NSFetchedResultsController *)controller
    {
        [self.tableView endUpdates];
    }
    ```

    Inside of the _configureCell:atIndexPath: method I have the following code:

    ```
    NSFetchRequest *issuesNumberRequest = [NSFetchRequest new];
    NSEntityDescription *issueEntity = [NSEntityDescription entityForName:@"Issue"
                                                   inManagedObjectContext:managedObjectContext];
    [issuesNumberRequest setEntity:issueEntity];

    NSPredicate *predicate = [NSPredicate predicateWithFormat:@"projectId == %@", project.projectId];
    [issuesNumberRequest setPredicate:predicate];

    NSUInteger issuesNumber = [managedObjectContext countForFetchRequest:issuesNumberRequest error:nil];
    [issuesNumberRequest release];
    ```

    I am using the managedObjectContext again. But when I try to insert a new Project, the app crashes with the following exception:

    ```
    Assertion failure in -[UITableView _endCellAnimationsWithContext:], /SourceCache/UIKit_Sim/UIKit-984.38/UITableView.m:774
    Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'Invalid update: invalid number of rows in section 0. The number of rows contained in an existing section after the update (4) must be equal to the number of rows contained in that section before the update (4), plus or minus the number of rows inserted or deleted from that section (1 inserted, 0 deleted).'
    ```

    Fortunately, I've found a workaround: if I create and use a separate NSManagedObjectContext inside of the _configureCell:atIndexPath: method, the app won't crash! I only want to know: is this behavior correct or not?

    Read the article

  • Google Webmaster Tools Data Highlighter says "Failed to load data, please try again later"

    - by George Garside
    I seem to be unable to access the Data Highlighter in Google Webmaster Tools since I attempted to start a new highlight on a page. Clicking the red Start Highlighting button to open the tagger did nothing, so I refreshed. Now the page loads without the middle content section, then a few seconds later shows the following error: Failed to load data, please try again later. I can't get any of the middle section to load, not even the list of current pages/page sets that have been highlighted; the same error shows. I thought it might be a Google service outage, but other sites' Data Highlighters work fine. It also seems too coincidental that it stopped working right after I attempted to start highlighting; I was able to list the existing pages and page sets fine before that, and I am still able to access the service on other sites. I've tried clearing browser data and have tried Google Chrome as well, with the same problem. What's happened?

    Read the article

  • BPM ADF Task forms. Checking whether the current user is in a BPM Swimlane

    - by Christopher Karl Chan
    Focus

    So this blog entry will focus on BPM swimlane roles and users from an ADF context. We have an ADF Task Details Form, and we are in the process of making it richer and more dynamic in functionality. A common requirement could be to dynamically show different areas based on the user logged into the workspace; perhaps we even want to know what swimlane role the user belongs to. It is a little bit harder to achieve than one thinks, unless you know the trick.

    The Challenge

    The tricky part here is that the ADF Task Details Form is in fact part of a separate J2EE application from the main workspace. So if you try to use Java or Expression Language to get the logged-in user, you will only find anonymous, and none of the BPM roles you are expecting. So what to do?

    The Magic

    First add the BC4J Security library to your view project, then restart JDeveloper. Now find the web.xml file in the view project of your ADF Task Details Application, look for the JpsFilter section, and add in the following section:

    ```
    <init-param>
        <param-name>application.name</param-name>
        <param-value>OracleBPMProcessRolesApp</param-value>
    </init-param>
    ```

    This will link your application to that of the BPM workspace. In the dynamic part of your ADF form you can now check whether the user logged into the BPM Workspace belongs to a BPM swimlane in any BPM process. The best way to do this is by using Expression Language in the JSF page itself. Here I am simply setting the rendered flag to either true or false, and thereby hiding or showing a section. Perhaps you are re-using the same form for a task in an approver swimlane and an ordinary user swimlane, and only want the approver to see this field. Call the built-in function to check if the user is a member of the BPM swimlane role; the name of the role must be of the syntax BPMProject.RoleName:

    ```
    <af:outputText value="This will only be rendered when the user is part of the BPM swimlane role"
                   rendered="#{securityContext.userInRole['BPMProjectName.Rolename']}"/>
    ```

    Now you must redeploy your ADF Task Form project. The text will then ONLY get rendered in the Task Details Form if the user logged into the workspace is a member of the swimlane Unsecure of the BPM project SimpleTask.

    Read the article

  • Beginner Guide to User Styles for Firefox

    - by Asian Angel
    While the default styles for most websites are nice, there may be times when you would love to tweak how things look. See how easy it can be to change how websites look with the Stylish extension for Firefox. Note: Scripts from Userstyles.org can also be added to Greasemonkey if you have it installed.

    Getting Started

    After installing the extension you will be presented with a first-run page. You may want to keep it open so that you can browse directly to the Userstyles.org website using the link in the upper left corner. In the lower right corner you will have a new Status Bar icon. If you have used Greasemonkey before, this icon works a little differently. It will be faded out due to no user style scripts being active at the moment. You can use either a left or right click to access the context menu. The user style script management section is also added into your Add-ons Management window instead of being separate. When you reach the user style scripts homepage you can choose to either learn more about the extension & scripts or… start hunting for lots of user style script goodness. There will be three convenient categories to get you jump-started if you wish. You could also conduct a search if you have something specific in mind. Here is some information directly from the website, provided for your benefit. Notice the reference to using these scripts with Greasemonkey… This section shows you how the scripts have been categorized and can give you a better idea of how to search for something more specific.

    Finding & Installing Scripts

    For our example we decided to look at the "Updated Styles" section first. Based on the page number listing at the bottom, there are a lot of scripts available to look through. Time to refine our search a little bit… Using the drop-down menu we selected site styles and entered Yahoo in the search blank. Needless to say, 5 pages was a lot easier to look through than 828. We decided to install the Yahoo! Result Number Script. When you do find a script (or scripts) that you like, simply click on the Install with Stylish button. A small window will pop up giving you the opportunity to preview, proceed with the installation, edit the code, or cancel the process. Note: In our example the preview function did not work, but it may be something particular to the script or our browser's settings. If you decide to do some quick editing, the window shown above will switch over to this one. To return to the previous window and install the user style script, click on the Switch to Install button. After installing the user style, the green section in the script's webpage will actually change to this message… Opening up the Add-ons Manager window shows our new script ready to go. The script worked perfectly when we conducted a search at Yahoo… the Status Bar icon also changed from faded out to full color (another indicator that everything is running nicely).

    Conclusion

    If you prefer a custom look for your favorite websites then you can have a lot of fun experimenting with different user style scripts. Note: See our article here for specialized How-To Geek User Style Scripts that can be added to your browser.

    Links

    Download the Stylish Extension (Mozilla Add-ons)
    Visit the Userstyles.org Website
    Install the Yahoo! Result Number User Style

    Read the article

  • The Windows 8 and Ubuntu 12.04 Dual Boot Nightmare

    - by Steve
    I have done some research as to how to go about this dual boot, and I am close, but I need some guidance with booting into Windows 8 (Ubuntu is installed). I have a Lenovo IdeaPad Y510p. I will go over what I have done to dual-boot this laptop, which came with Windows 8 pre-installed, with Ubuntu 12.04: I followed every instruction to the letter for the 97-vote response here, and everything worked fine up until after the repair boot section: Installing on a Pre-Installed Windows 8 System (UEFI Supported) I ran into the following error upon restarting after the repair boot section: error: invalid arch independent elf magic. This error (a grub issue) prevented me from booting into Ubuntu :( After a little googling, I followed the instructions in the reactivating grub 2 section to resolve the error: http://kb.acronis.com/content/1686 I found a possible solution to fixing the Windows 8 boot issue, and tried it: http://webcache.googleusercontent.com/search?q=cache:i9JMyXzzRpYJ:askubuntu.com/questions/279275/dual-boot-problem-windows-8-ubuntu-12-04+&cd=1&hl=en&ct=clnk&gl=us&client=ubuntu I thought the above solution worked, but when I attempt to boot into Windows 8, I get the following missing file error:

    File: \Boot\BCD
    Status: 0xc000000e
    Info: The Boot Configuration Data for your PC is missing or contains errors.

    Here is some other information that may be useful: I have 3 partitions devoted to Ubuntu. The first, sda8, has the flag bios_grub (1049 kB). The second, sda9, is where everything else is (96.6 GB). The last, sda10, is for swap (8299 MB). My question is: how do I fix the boot configuration for Windows 8? Any help would be greatly appreciated :)

    Update 1: When I attempt to boot into UEFI mode, I get the following error: invalid arch independent elf magic (the same error I saw in step 2).

    Update 2: A useful link I found: Dual booting Ubuntu 12.04: UEFI and Legacy. So, this is my 4th time installing Ubuntu on the laptop, and it looks like I need to install it in UEFI mode. Should I scrap it all again and reinstall? Or is there ANY way of salvaging my installation? At this point, I can't even boot into Windows (although I have an installation CD to fix the Windows boot issue, that would ultimately screw over Ubuntu).

    Update 3: After doing a little more browsing around, I found a cool way around this messy grub stuff, using rEFInd. Rod Smith's post here saved me! Installing Ubuntu 12.04.2 in UEFI mode. Now I am able to dual-boot Windows 8 and Ubuntu and boot into both operating systems :) I have another issue (relating to the boot configuration in the BIOS) that I will post as a separate question :)
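    For anyone hitting the same wall before discovering rEFInd, a commonly suggested recovery path (a sketch, not the poster's method) is the Boot-Repair tool, run from an Ubuntu live session with network access; the PPA below belongs to the Boot-Repair project:

    ```
    sudo add-apt-repository ppa:yannubuntu/boot-repair
    sudo apt-get update
    sudo apt-get install -y boot-repair
    boot-repair    # the "Recommended repair" reinstalls GRUB for the detected firmware mode
    ```

    A missing \Boot\BCD, in turn, can usually be rebuilt from Windows installation media by running bootrec /rebuildbcd in the recovery console.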

    Read the article

  • Should CSS be listed on your resume under Languages?

    - by Sandeepan Nath
    I have some doubts, like whether CSS should be put under Languages or not. Wikipedia says "Cascading Style Sheets (CSS) is a style sheet language...", but do people write CSS under the Languages section of a resume, along with PHP, etc.? Similarly, what about HTML? I have some doubt, and I don't want to sound like someone who is not aware of the trends. Just to give an example, currently I have the following languages, frameworks, technologies, etc. listed under the "Technical Expertise" section of my resume:

    Technical Expertise
    * Languages - Proficient: PHP 5, JavaScript, HTML, CSS, Sass. Beginner: Linux Bash.
    * Databases - MySQL 5.
    * Technologies - AJAX.
    * Frameworks/Libraries - Symfony, jQuery.
    * CMSes - WordPress.

    Although my domain is web development/design, I welcome domain-agnostic answers which can provide some generic ideas/reasoning. I have seen a lot of people mess up these sections (even more serious than my doubts :) ), putting things under wrong sub-headings and thus putting a big question mark on their understanding of those things. I don't know much about XML, Comet technology, etc. Considering those are included too, what things should be put under Languages? E.g. should CSS be put under Languages? Please give some reasoning to support your views. Where should the others (XML, Comet, cURL, etc.) be put? I welcome some examples of how you put it. Or do you have an additional Keywords section where you write all the unsortables? Considering a set of standards like W3C standards, etc., do you have a Standards sub-heading? I guess I have put the contents of the other sections okay. But do let me know about your ideas and reasoning. After all, I understand there may not be a single answer to this, but let's see what the trend is. Thanks.

    Updates: Further, do you mention design patterns you have used? Web services, etc.? Where do you mention SOAP, XML, etc...?

    Read the article

  • OSX Server 3, Mac clients binding to OD and Profile Manager failing

    - by dbf
    I've made a setup consisting of a Mac Mini with OS X Server 3 (Mavericks 10.9.2) using Open Directory and Profile Manager (Mail etc. all set up and working). Now the thing is, internally on the local network everything works great. Clients can bind to the OD and the users are able to log in. I can install trust and settings profiles (either custom or group profiles) and all services in the profiles mentioned are configured correctly. I can log in and out, jump around, and do it a hundred times on different Macs with different users; it works. My goal is to make this service public. The domain is an FQDN which I own; for simplicity let's say server.domain.com. Now the only way for me to bind the clients to the OD is using LDAP mapping RFC2307 (without SSL) and a DN suffix of dc=server,dc=domain,dc=com, using Directory Utility. The From Server and Open Directory options will throw several errors, like Connection failed to node '/LDAPv3/server.domain.com' (2100). First of all, I don't really understand why clients can't bind to the OD like they do locally, with or without SSL (all ports are open, literally all ports are open, not just 389, 636 and 1640; wasn't sure if I was missing any). When the clients bind using LDAP mapping RFC2307 (which only works without SSL), they are able to authenticate, log in and even load the trust profile. But every settings profile will fail with the debug message Unable to find GUID in user record OD, or fail to install saying the user identification is missing. Is there any way to get this to work without RFC2307? Because quite some stuff is missing when using RFC2307 instead of pulling the mapping from the server or using Open Directory. Is this setup even possible? Or should I use VPN to authenticate with the OD? The network setup is a modem/router (DHCP off) with the WAN NATted to an AirPort Extreme (using DHCP+NAT). The AE does notify with a double-NAT message, but I haven't had any problems with it on any other service. So WAN - 192.168.2.220 (static), AE - 10.0.1.* (DHCP).

    Output of dig from the outside, using dig server.domain.com:

    ```
    ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
    ;; QUESTION SECTION:
    ;server.domain.com.          IN      A
    ;; ANSWER SECTION:
    server.domain.com.   77      IN      A       91.50.*.* (valid WAN IP)
    ;; SERVER 172.*.*.1#53(172.*.*.1) (iPhone)
    ```

    dig locally from a client and the server (same output):

    ```
    ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 0
    ;; QUESTION SECTION:
    ;server.domain.com.          IN      A
    ;; ANSWER SECTION:
    server.domain.com.   10800   IN      A       10.0.1.11
    ;; AUTHORITY SECTION:
    server.domain.com.   10800   IN      NS      domain.com. (used for email send in relay)
    server.domain.com.   10800   IN      NS      server.domain.com.
    ;; SERVER 10.0.1.11#53(10.0.1.11)
    ```

    Are there any things I should check? I only have OS X.

    -- Double-NAT issue: plugged the server directly into the modem/router with a static IP and the issue remains. Guess that rules out the double-NAT thing.

    -- changeip -checkhostname comes back with "There is nothing to change", i.e. success:

    ```
    Primary address = 10.0.1.11
    Current HostName = server.domain.com
    DNS HostName = server.domain.com
    ```

    For now, I've made a workaround by using an admin account that forces a permanent VPN connection on boot. That means a connection is already made or underway before the login. I will continue this post when I have more time, also locating all the necessary .log files of each application involved. I have some suspicions but have to debug a bit more when I have more time on my hands... unless, of course, I get sidetracked with having a life. Which is arguably not very likely. krypted.com
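    For anyone diagnosing the same thing: the bind can also be attempted from a client's command line instead of Directory Utility, which tends to surface more detailed errors. A hedged sketch using the stock dsconfigldap tool (the server name, configuration name, and credentials below are placeholders):

    ```
    # Verbosely add an OD/LDAP server configuration on a 10.9 client
    sudo dsconfigldap -v -a server.domain.com -n "server.domain.com" \
         -c "$(hostname -s)" -u diradmin -p 'diradmin-password'
    ```

    The verbose output can show at which stage the bind fails (network, TLS, or record mapping), which helps narrow down whether this is a DNS/NAT problem or an LDAP mapping problem.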

    Read the article

< Previous Page | 40 41 42 43 44 45 46 47 48 49 50 51  | Next Page >