Search Results

Search found 98288 results on 3932 pages for 'user interface'.


  • sftp and public keys

    - by Lizard
    I am trying to sftp into a server hosted by someone else. To make sure this worked I did the standard sftp [email protected], was prompted for the password, and that worked fine. I am setting up a cron script to send a file once a week, so I have given them our public key, which they claim to have added to their authorized_keys file. I now try sftp [email protected] again and I am still prompted for a password, but now the password doesn't work:

        Connecting to [email protected]...
        [email protected]'s password:
        Permission denied, please try again.
        [email protected]'s password:
        Permission denied, please try again.
        [email protected]'s password:
        Permission denied (publickey,password).
        Couldn't read packet: Connection reset by peer

    I did notice, however, that if I simply pressed enter (no password) it logged me in fine... So here are my questions:

    - Is there a way to check what private key / public key pair my sftp connection is using?
    - Is it possible to specify which key pair to use?
    - If all is set up correctly (using the correct key pair, added to the authorized_keys file), why am I being asked to enter a blank password?

    Thanks for your help in advance!

    UPDATE: I have just run sftp -vvv [email protected] ...

        debug1: Authentications that can continue: publickey,password
        debug3: start over, passed a different list publickey,password
        debug3: preferred gssapi-with-mic,publickey,keyboard-interactive,password
        debug3: authmethod_lookup publickey
        debug3: remaining preferred: keyboard-interactive,password
        debug3: authmethod_is_enabled publickey
        debug1: Next authentication method: publickey
        debug1: Offering public key: /root/.ssh/id_rsa
        debug3: send_pubkey_test
        debug2: we sent a publickey packet, wait for reply
        debug1: Server accepts key: pkalg ssh-rsa blen 277
        debug2: input_userauth_pk_ok: SHA1 fp 45:1b:e7:b6:33:41:1c:bb:0f:e3:c1:0f:1b:b0:d5:e4:28:a3:3f:0e
        debug3: sign_and_send_pubkey
        debug1: read PEM private key done: type RSA
        debug1: Authentications that can continue: publickey,password
        debug1: Trying private key: /root/.ssh/id_dsa
        debug3: no such identity: /root/.ssh/id_dsa
        debug2: we did not send a packet, disable method
        debug3: authmethod_lookup password
        debug3: remaining preferred: ,password
        debug3: authmethod_is_enabled password
        debug1: Next authentication method: password

    It seems to suggest that it tries to use the public key... What am I missing?
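    A minimal sketch of how one could check and pin the key pair in use; the custom key path is hypothetical, while -v, -i, and the IdentityFile/IdentitiesOnly options are standard OpenSSH:

        # Show which identities ssh/sftp offers while authenticating:
        sftp -v user@server.example.com
        # Point sftp at one specific private key:
        sftp -i /root/.ssh/id_rsa_custom user@server.example.com
        # Or pin it per host in ~/.ssh/config:
        #   Host server.example.com
        #       IdentityFile /root/.ssh/id_rsa_custom
        #       IdentitiesOnly yes

    IdentitiesOnly stops the client from offering every default key (such as the /root/.ssh/id_rsa seen in the -vvv output above) before the intended one.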

    Read the article

  • Using Stub Objects

    - by user9154181
    Having told the long and winding tale of where stub objects came from and how we use them to build Solaris, I'd like to focus now on the nuts and bolts of building and using them. The following new features were added to the Solaris link-editor (ld) to support the production and use of stub objects:

    -z stub
    This new command line option informs ld that it is to build a stub object rather than a normal object. In this mode, it accepts the same command line arguments as usual, but will quietly ignore any objects and sharable object dependencies.

    STUB_OBJECT Mapfile Directive
    In order to build a stub version of an object, its mapfile must specify the STUB_OBJECT directive. When producing a non-stub object, the presence of STUB_OBJECT causes the link-editor to perform extra validation to ensure that the stub and non-stub objects will be compatible.

    ASSERT Mapfile Directive
    All data symbols exported from the object must have an ASSERT symbol directive in the mapfile that declares them as data and supplies the size, binding, bss attributes, and symbol aliasing details. When building the stub objects, the information in these ASSERT directives is used to create the data symbols. When building the real object, these ASSERT directives will ensure that the real object matches the linking interface presented by the stub.

    Although ASSERT was added to the link-editor in order to support stub objects, it is a general purpose feature that can be used independently of stub objects. For instance, you might choose to use an ASSERT directive if you have a symbol that must have a specific address in order for the object to operate properly and you want to automatically ensure that this will always be the case.

    The material presented here is derived from a document I originally wrote during the development effort, which had the dual goals of providing supplemental materials for the stub object PSARC case, and serving as a set of edits that were eventually applied to the Oracle Solaris Linker and Libraries Manual (LLM). The Solaris 11 LLM contains this information in a more polished form.

    Stub Objects

    A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object, while containing no code or data. Stub objects cannot be used at runtime. However, an application can be built against a stub object, where the stub object provides the real object name to be used at runtime, and then use the real object at runtime.

    When building a stub object, the link-editor ignores any object or library files specified on the command line, and these files need not exist in order to build a stub. Since the compilation step can be omitted, and because the link-editor has relatively little work to do, stub objects can be built very quickly.

    Stub objects can be used to solve a variety of build problems:

    Speed
    Modern machines, using a version of make with the ability to parallelize operations, are capable of compiling and linking many objects simultaneously, and doing so offers significant speedups. However, it is typical that a given object will depend on other objects, and that there will be a core set of objects that nearly everything else depends on. It is necessary to impose an ordering that builds each object before any other object that requires it. This ordering creates bottlenecks that reduce the amount of parallelization that is possible and limits the overall speed at which the code can be built.
    Complexity/Correctness
    In a large body of code, there can be a large number of dependencies between the various objects. The makefiles or other build descriptions for these objects can become very complex and difficult to understand or maintain. The dependencies can change as the system evolves. This can cause a given set of makefiles to become slightly incorrect over time, leading to race conditions and mysterious rare build failures.

    Dependency Cycles
    It might be desirable to organize code as cooperating shared objects, each of which draws on the resources provided by the other. Such cycles cannot be supported in an environment where objects must be built before the objects that use them, even though the runtime linker is fully capable of loading and using such objects if they could be built.

    Stub shared objects offer an alternative method for building code that sidesteps the above issues. Stub objects can be quickly built for all the shared objects produced by the build. Then, all the real shared objects and executables can be built in parallel, in any order, using the stub objects to stand in for the real objects at link-time. Afterwards, the executables and real shared objects are kept, and the stub shared objects are discarded.

    Stub objects are built from a mapfile, which must satisfy the following requirements:

    - The mapfile must specify the STUB_OBJECT directive. This directive informs the link-editor that the object can be built as a stub object, and as such causes the link-editor to perform validation and sanity checking intended to guarantee that an object and its stub will always provide identical linking interfaces.
    - All function and data symbols that make up the external interface to the object must be explicitly listed in the mapfile.
    - The mapfile must use symbol scope reduction ('*') to remove any symbols not explicitly listed from the external interface.
    - All global data exported from the object must have an ASSERT symbol attribute in the mapfile to specify the symbol type, size, and bss attributes. In the case where there are multiple symbols that reference the same data, the ASSERT for one of these symbols must specify the TYPE and SIZE attributes, while the others must use the ALIAS attribute to reference this primary symbol.

    Given such a mapfile, the stub and real versions of the shared object can be built using the same command line for each, adding the '-z stub' option to the link for the stub object, and omitting the option from the link for the real object.

    To demonstrate these ideas, the following code implements a shared object named idx5, which exports data from a 5 element array of integers, with each element initialized to contain its zero-based array index. This data is available as a global array, via an alternative alias data symbol with weak binding, and via a functional interface.

        % cat idx5.c
        int _idx5[5] = { 0, 1, 2, 3, 4 };

        #pragma weak idx5 = _idx5

        int
        idx5_func(int index)
        {
                if ((index < 0) || (index > 4))
                        return (-1);
                return (_idx5[index]);
        }

    A mapfile is required to describe the interface provided by this shared object.

        % cat mapfile
        $mapfile_version 2
        STUB_OBJECT;
        SYMBOL_SCOPE {
                _idx5 {
                        ASSERT { TYPE=data; SIZE=4[5] };
                };
                idx5 {
                        ASSERT { BINDING=weak; ALIAS=_idx5 };
                };
                idx5_func;
            local:
                *;
        };

    The following main program is used to print all the index values available from the idx5 shared object.
        % cat main.c
        #include <stdio.h>

        extern int _idx5[5], idx5[5], idx5_func(int);

        int
        main(int argc, char **argv)
        {
                int i;

                for (i = 0; i < 5; i++)
                        printf("[%d] %d %d %d\n", i, _idx5[i], idx5[i], idx5_func(i));
                return (0);
        }

    The following commands create a stub version of this shared object in a subdirectory named stublib. elfdump is used to verify that the resulting object is a stub. The command used to build the stub differs from that of the real object only in the addition of the -z stub option, and the use of a different output file name. This demonstrates the ease with which stub generation can be added to an existing makefile.

        % cc -Kpic -G -M mapfile -h libidx5.so.1 idx5.c -o stublib/libidx5.so.1 -zstub
        % ln -s libidx5.so.1 stublib/libidx5.so
        % elfdump -d stublib/libidx5.so | grep STUB
              [11]  FLAGS_1           0x4000000           [ STUB ]

    The main program can now be built, using the stub object to stand in for the real shared object, and setting a runpath that will find the real object at runtime. However, as we have not yet built the real object, this program cannot yet be run. Attempts to cause the system to load the stub object are rejected, as the runtime linker knows that stub objects lack the actual code and data found in the real object, and cannot execute.

        % cc main.c -L stublib -R '$ORIGIN/lib' -lidx5 -lc
        % ./a.out
        ld.so.1: a.out: fatal: libidx5.so.1: open failed: No such file or directory
        Killed
        % LD_PRELOAD=stublib/libidx5.so.1 ./a.out
        ld.so.1: a.out: fatal: stublib/libidx5.so.1: stub shared object cannot be used at runtime
        Killed

    We build the real object using the same command as we used to build the stub, omitting the -z stub option, and writing the results to a different file.

        % cc -Kpic -G -M mapfile -h libidx5.so.1 idx5.c -o lib/libidx5.so.1

    Once the real object has been built in the lib subdirectory, the program can be run.

        % ./a.out
        [0] 0 0 0
        [1] 1 1 1
        [2] 2 2 2
        [3] 3 3 3
        [4] 4 4 4

    Mapfile Changes

    The version 2 mapfile syntax was extended in a number of places to accommodate stub objects.

    Conditional Input
    The version 2 mapfile syntax has the ability to conditionalize mapfile input using the $if control directive. As you might imagine, these directives are used frequently with ASSERT directives for data, because a given data symbol will frequently have a different size in 32 or 64-bit code, or on differing hardware such as x86 versus sparc. The link-editor maintains an internal table of names that can be used in the logical expressions evaluated by $if and $elif. At startup, this table is initialized with items that describe the class of object (_ELF32 or _ELF64) and the type of the target machine (_sparc or _x86). We found that there were a small number of cases in the Solaris code base in which we needed to know what kind of object we were producing, so we added the following new predefined items in order to address that need:

        Name        Meaning
        ...         ...
        _ET_DYN     shared object
        _ET_EXEC    executable object
        _ET_REL     relocatable object
        ...         ...

    STUB_OBJECT Directive
    The new STUB_OBJECT directive informs the link-editor that the object described by the mapfile can be built as a stub object.

        STUB_OBJECT;

    A stub shared object is built entirely from the information in the mapfiles supplied on the command line. When the -z stub option is specified to build a stub object, the presence of the STUB_OBJECT directive in a mapfile is required, and the link-editor uses the information in symbol ASSERT attributes to create global symbols that match those of the real object.
    When the real object is built, the presence of STUB_OBJECT causes the link-editor to verify that the mapfiles accurately describe the real object interface, and that a stub object built from them will provide the same linking interface as the real object it represents:

    - All function and data symbols that make up the external interface to the object must be explicitly listed in the mapfile.
    - The mapfile must use symbol scope reduction ('*') to remove any symbols not explicitly listed from the external interface.
    - All global data in the object is required to have an ASSERT attribute that specifies the symbol type and size. If the ASSERT BINDING attribute is not present, the link-editor provides a default assertion that the symbol must be GLOBAL. If the ASSERT SH_ATTR attribute is not present, or does not specify that the section is one of BITS or NOBITS, the link-editor provides a default assertion that the associated section is BITS.
    - All data symbols that describe the same address and size are required to have ASSERT ALIAS attributes specified in the mapfile. If aliased symbols are discovered that do not have an ASSERT ALIAS specified, the link fails and no object is produced.

    These rules ensure that the mapfiles contain a description of the real shared object's linking interface that is sufficient to produce a stub object with a completely compatible linking interface.

    SYMBOL_SCOPE/SYMBOL_VERSION ASSERT Attribute
    The SYMBOL_SCOPE and SYMBOL_VERSION mapfile directives were extended with a symbol attribute named ASSERT. The syntax for the ASSERT attribute is as follows:

        ASSERT {
                ALIAS = symbol_name;
                BINDING = symbol_binding;
                TYPE = symbol_type;
                SH_ATTR = section_attributes;

                SIZE = size_value;
                SIZE = size_value[count];
        };

    The ASSERT attribute is used to specify the expected characteristics of the symbol. The link-editor compares the symbol characteristics that result from the link to those given by ASSERT attributes. If the real and asserted attributes do not agree, a fatal error is issued and the output object is not created.

    In normal use, the link-editor evaluates ASSERT attributes when present, but does not require them, or provide default values for them. The presence of the STUB_OBJECT directive in a mapfile alters the interpretation of ASSERT to require them under some circumstances, and to supply default assertions if explicit ones are not present. See the definition of the STUB_OBJECT directive for the details. When the -z stub command line option is specified to build a stub object, the information provided by ASSERT attributes is used to define the attributes of the global symbols provided by the object.

    ASSERT accepts the following:

    ALIAS
    Name of a previously defined symbol that this symbol is an alias for. An alias symbol has the same type, value, and size as the main symbol. The ALIAS attribute is mutually exclusive to the TYPE, SIZE, and SH_ATTR attributes, and cannot be used with them. When ALIAS is specified, the type, size, and section attributes are obtained from the alias symbol.

    BINDING
    Specifies an ELF symbol binding, which can be any of the STB_ constants defined in <sys/elf.h>, with the STB_ prefix removed (e.g. GLOBAL, WEAK).

    TYPE
    Specifies an ELF symbol type, which can be any of the STT_ constants defined in <sys/elf.h>, with the STT_ prefix removed (e.g. OBJECT, COMMON, FUNC). In addition, for compatibility with other mapfile usage, FUNCTION and DATA can be specified, for STT_FUNC and STT_OBJECT, respectively.
    TYPE is mutually exclusive to ALIAS, and cannot be used in conjunction with it.

    SH_ATTR
    Specifies attributes of the section associated with the symbol. The section_attributes that can be specified are given in the following table:

        Section Attribute    Meaning
        BITS                 Section is not of type SHT_NOBITS
        NOBITS               Section is of type SHT_NOBITS

    SH_ATTR is mutually exclusive to ALIAS, and cannot be used in conjunction with it.

    SIZE
    Specifies the expected symbol size. SIZE is mutually exclusive to ALIAS, and cannot be used in conjunction with it. The syntax for the size_value argument is as described in the discussion of the SIZE attribute below.

    SIZE
    The SIZE symbol attribute existed before support for stub objects was introduced. It is used to set the size attribute of a given symbol. This attribute results in the creation of a symbol definition.

    Prior to the introduction of the ASSERT SIZE attribute, the value of a SIZE attribute was always numeric. While attempting to apply ASSERT SIZE to the objects in the Solaris ON consolidation, I found that many data symbols have a size based on the natural machine wordsize for the class of object being produced. Variables declared as long, or as a pointer, will be 4 bytes in size in a 32-bit object, and 8 bytes in a 64-bit object. Initially, I employed the conditional $if directive to handle these cases as follows:

        $if _ELF32
        foo {
                ASSERT { TYPE=data; SIZE=4 };
        };
        bar {
                ASSERT { TYPE=data; SIZE=20 };
        };
        $elif _ELF64
        foo {
                ASSERT { TYPE=data; SIZE=8 };
        };
        bar {
                ASSERT { TYPE=data; SIZE=40 };
        };
        $else
        $error UNKNOWN ELFCLASS
        $endif

    I found that the situation occurs frequently enough that this is cumbersome. To simplify this case, I introduced the idea of the addrsize symbolic name, and of a repeat count, which together make it simple to specify machine word scalar or array symbols. Both the SIZE and ASSERT SIZE attributes support this syntax:

    - The size_value argument can be a numeric value, or it can be the symbolic name addrsize. addrsize represents the size of a machine word capable of holding a memory address. The link-editor substitutes the value 4 for addrsize when building 32-bit objects, and the value 8 when building 64-bit objects. addrsize is useful for representing the size of pointer variables and C variables of type long, as it automatically adjusts for 32 and 64-bit objects without requiring the use of conditional input.
    - The size_value argument can be optionally suffixed with a count value, enclosed in square brackets. If count is present, size_value and count are multiplied together to obtain the final size value.

    Using this feature, the example above can be written more naturally as:

        foo {
                ASSERT { TYPE=data; SIZE=addrsize };
        };
        bar {
                ASSERT { TYPE=data; SIZE=addrsize[5] };
        };

    Exported Global Data Is Still A Bad Idea

    As you can see, the additional plumbing added to the Solaris link-editor to support stub objects is minimal. Furthermore, about 90% of that plumbing is dedicated to handling global data. We have long advised against global data exported from shared objects. There are many ways in which global data does not fit well with dynamic linking. Stub objects simply provide one more reason to avoid this practice. It is always better to export all data via a functional interface. You should always hide your data, and make it available to your users via a function that they can call to acquire the address of the data item.
    However, if you do have to support global data for a stub, perhaps because you are working with an already existing object, it is still easily done, as shown above. Oracle does not like us to discuss hypothetical new features that don't exist in shipping product, so I'll end this section with a speculation. It might be possible to do more in this area to ease the difficulty of dealing with objects that have global data that the users of the library don't need. Perhaps someday...

    Conclusions

    It is easy to create stub objects for most objects. If your library only exports function symbols, all you have to do to build a faithful stub object is to add STUB_OBJECT; and then to use the same link command you're currently using, with the addition of the -z stub option. A small sketch of that function-only case follows below. Happy Stubbing!
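    To make that last point concrete, here is a minimal sketch of the function-only case; the library and symbol names are hypothetical, and the syntax follows the version 2 mapfile examples above:

        % cat mapfile
        $mapfile_version 2
        STUB_OBJECT;
        SYMBOL_SCOPE {
                # Hypothetical exported functions; functions need no ASSERT
                mylib_open;
                mylib_read;
                mylib_close;
            local:
                *;
        };
        % cc -Kpic -G -M mapfile -h libmylib.so.1 mylib.c -o stublib/libmylib.so.1 -z stub

    Since only function symbols are exported, no ASSERT directives are required; running the same cc command without -z stub (and with a different output path) produces the real object.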

    Read the article

  • Using Definition of Done to Drive Agile Maturity

    - by Dylan Smith
    I've been an Agile Coach at a lot of different clients over the years, and I want to share an approach I use to help them adopt and mature over time. It's important to realize that "Agile" is not a black/white yes/no thing. Teams can be varying degrees of agile. I think of this as their agile maturity level. When I coach teams I want them to start out being a little agile, and get more agile as they mature. The approach I teach them is to use the Definition of Done as a technique to continuously improve their agile maturity over time.

    We're probably all familiar with the concept of "Done Done", which represents what it *actually* means for a feature to be done. Not just when a developer says he's done right after he writes that last line of code that makes the feature kind-of work. Done Done means the coding is done, it's been tested, installers and deployment packages have been created, user manuals have been updated, architecture docs have been updated, etc. To enable teams to internalize the concept of "Done Done", they usually get together and come up with their Definition of Done (DoD) that defines all the activities that need to be completed before a feature is considered Done Done.

    The Done Done technique typically is applied only to features (aka User Stories). What I do is extend this to apply to several concepts such as User Stories, Sprints, Releases (and sometimes Check-Ins). During project kick-off I'll usually sit down with the team and go through an exercise of creating DoDs for each of these concepts (Stories/Sprints/Releases). We'll usually start by just brainstorming a bunch of activities that could end up in these various DoDs. Here are some examples:

    - Code Reviews
    - StyleCop
    - FxCop
    - User Manuals Updated
    - Architecture Docs Updated
    - Tested by QA
    - Tested by UAT
    - Installers Created
    - Support Knowledge Base Updated
    - Deployment Instructions (for Ops) written
    - Automated Unit Tests Run
    - Automated Integration Tests Run

    Then we start by arranging these activities into the place they occur today (e.g. do you do UAT testing only once per release? Every sprint? Every feature?). If the team was previously Waterfall, most of these activities probably end up in the Release DoD. An extremely mature agile team would probably have most of these activities in the DoD for the User Stories (because an extremely mature agile team will probably do continuous deployment and release every story).

    So what we need to do as a team is work to move these activities from their current home (Release DoD) down into the Sprint DoD and eventually into the User Story DoD (and maybe into the lower-level Check-In DoD if we decide to use that). We don't have to move them all down to User Story immediately, but as a team we figure out what we think we're capable of moving down to the Sprint cycle and Story cycle immediately, and that becomes our starting DoDs. Over time the team makes an effort to continue moving activities down from Release->Sprint->Story as they become more agile and more mature. I try to encourage them to envision a world in which they deploy to production as each User Story is completed. They would need to be updating user manuals, creating installers, doing UAT testing (typical Release cycle activities) on every single User Story. They may never actually reach that point, but they should envision that, and strive to keep driving the activities down closer to the User Story cycle as they mature. This is a great technique to give a team an easy-to-follow roadmap to mature their agile practices over time.
    Sure, there are other aspects to maturity outside of this, but it's a great technique, that's easy to visualize, to drive agility into the team. Just keep moving those activities (aka "gates") down the board from Release->Sprint->Story. I'll try to give an example of what a recent client of mine had for their DoDs (this is from memory, so probably not 100% accurate):

    Release
    - Create/Update deployment instructions for Ops
    - Instructional videos updated
    - Run manual regression test suite
    - UAT testing (in this case that meant deploying to an environment shared across the enterprise that mirrored production, and asking other business groups to test their own apps to ensure we didn't break anything outside our system)

    Sprint
    - Deploy to UAT environment (but not necessarily actually request UAT testing occur)
    - User guides updated
    - Sprint features video created (in this case we decided to create a video each sprint showing off the progress, a video version of the Sprint Demo)

    User Story
    - Manual test scripts developed and run
    - Tested by BA
    - Deployed in shared QA environment using automated deployment process
    - Peer code review

    Code Check-In
    - Compiled (warning-free)
    - Passes StyleCop
    - Passes FxCop
    - Create installer packages
    - Run automated unit tests
    - Run automated integration tests

    PS – One of my clients had a great question when we went through this activity. They said that if a Sprint is by definition done when the end-date rolls around (time-boxed), isn't a DoD on a sprint meaningless, since it's done on the end-date regardless of whether those other activities are complete or not? My answer is that while that statement is true (the sprint is done regardless when the end date rolls around), if the DoD activities haven't been completed I would consider the Sprint a failure, similar to not completing what was committed/planned (failure may be too strong a word, but you get the idea). In the Retrospective that will become an agenda item to discuss and understand why we weren't able to complete the activities we agreed would need to be completed each Sprint.

    Read the article

  • What do Windows 7 encrypted files look like?

    - by Sean Farrell
    Ok, this is kind of an odd question: what do Windows 7 (Home Premium) encrypted files look like "from the outside"?

    Now here is the story. An acquaintance of a friend of mine got a nasty virus / scareware. So I whipped out my PC technician cap and went to work on it. What I did was remove the drive from the laptop and put the drive into my external drive bay. I scanned the drive and yes, it was loaded with stuff. That basically cured the infection and I could start the system back up.

    To check if it cured the problem I wanted to see the system while running. There were two user accounts, one with a password and one without (both admin users!?!). So I logged into the unprotected user and cleaned up the residual issues, like the proxy server to localhost in the browser config.

    Now I wanted to do the same for the password protected user. What I noticed was that from my system and from the unprotected user account, the files of the protected user looked garbled. The file names are something like 12 random alphanumeric chars, but the folders looked ok. Naive as I was, I thought this might be how encrypted files look "from the outside". (I never use Microsoft's own security features, so how would I know? TrueCrypt is one big blob.)

    Since the second user could not be reached, I thought sod it and removed the password from the account. (That might have been a mistake, I know.) Now I did the same clean up tasks, and all was nice and fine, except for the files, which were still "encrypted". So I looked into many Windows encrypted file recovery posts, and not all hope is lost, since I should be able to extract the certificate and, with the password, regain access to the files. Also note that Windows did "only" prompt me that removing the password would be insecure, not that access to encrypted files would be lost, like it is claimed in most recovery articles. Resetting the password did not help, and I gave up for the night.

    The question that nagged me half of the last night was: what if the files are not encrypted, but the scareware encrypted / destroyed them? I don't want to spend hours of work trying to recover files that are not recoverable. The thing is that the user does not remember turning encryption on, and aren't encrypted files marked in blue, with the filename still readable? Many thanks for input from users who have more knowledge about EFS...
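    One way to check from another account whether files really are EFS-encrypted is the built-in cipher tool; a sketch, assuming the rescued drive is mounted as D: and a hypothetical profile path:

        REM List encryption flags for a folder (U = unencrypted, E = encrypted):
        cipher D:\Users\ProtectedUser\Documents
        REM Show details, including recovery certificates, for one file:
        cipher /c D:\Users\ProtectedUser\Documents\somefile.doc

    EFS-encrypted files keep their original, readable names (shown in green in Explorer, with compressed files in blue), so randomized 12-character names would point toward scareware damage rather than EFS.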

    Read the article

  • Can't authenticate as my user, sudo, nor root in Debian "Jessie" Gnome anymore?

    - by Janar
    I'm a Debian beginner & GUI guy in a bit of trouble. I can't log in via sudo/gksu/root/su, nor as my (main/super)user, after removing the user password via the Gnome user settings.

    History of actions (probably irrelevant though):
    - Installed Debian "Jessie" GNU/Linux with the Xfce GUI (en-US) as the only OS. Hardware is a ThinkPad W510.
    - Skipped the root password in setup, so the main user easily gets superuser rights via sudo.
    - Installed Xfce only to achieve more control (easier package management), so I could set up Gnome much more to my liking; I have always logged in with Gnome (3.4.x), not once with Xfce.
    - Added more Jessie repos (the same ones as in Wheezy stable by default, but for Jessie, as Jessie only had repos for security updates by default).
    - Installed lots of GTK(3) & Gnome(3) based software (restarted again after this).
    - Installed the proprietary graphics card driver for my Nvidia Quadro (restarted once again after that one).
    - Installed more stuff related to my work/school/development.

    The actual problem: I had a plan to restart again, but wanted to set up auto-login first. Instead I set my user password to none (don't ask why / perhaps caused by being awake for a looooong time), noticed it, and also set auto-login, but couldn't undo my mistake by creating a new password for myself. As my password is set to none, I would have expected that simply pressing return at an empty password prompt would do, but it won't authenticate. I tried Alt+F2 "gksu gedit" as well as sudo wget "https://www.some-page.eu/file.ext" and "su" in terminals; none of them work (quite illogical actually, as I'm the sudoer and highest ranked superuser, besides being the only user on this computer).

    Current stand: everything worked, and still works, nicely after this accident, besides this password prompt part. I'm too spooked to log out or restart. The Synaptic package manager is still open with root rights (the only such window left open prior to the issue, not closed since, just in case). I googled for help and read some manuals/FAQs/how-tos; most lead to sudoers file management, but I have not found one specifically for my issue, so I'm still not any smarter. I really hope I don't have to redo the OS install all over again because of just one stupid mistake. Thanks for your reply :-)
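    A minimal sketch of the usual way out, assuming a hypothetical username and that a root shell can be reached (for example via the recovery mode entry in the GRUB menu at boot, or from any process still running with root rights):

        # From the root shell, set a fresh password for the locked-out account:
        passwd janar        # hypothetical username
        # Optionally give root its own password as well:
        passwd root

    Once the account has a password again, sudo/gksu/su prompts should accept it as before.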

    Read the article

  • How do I call up values in PHP for user input in forms (radio buttons and selects)

    - by Derek
    Ok, so my admin goes to edit a book which was created. I know how to bring in the values that were initially entered via a simple text field like 'bookname'. On the edit book page, the book name field stores the currently assigned 'bookname' in the field (which is what I want! :) ).

    However, I have other field types like selects and radio button entries... I'm having trouble calling in the value that was already set when the book was created. For example, there is a 'booklevel' field, which I have set up as radio button entries: Hard, Normal, and Easy. When the user goes to edit the book, I'm not too sure how to have the current value drawn up (it's stored as text) and the radio button checked, i.e. 'Normal' is checked if this is what was set when the book was created. So far I have this as the code for adding the book level:

        <label>Book Level:</label>
        <label for="booklevel1" class="radio">Hard
            <input type="radio" name="booklevel" id="booklevel1"
                value="<?php echo 'Hard'; if (isset($_POST['booklevel'])); ?>"></label>
        <label for="booklevel2" class="radio">Medium
            <input type="radio" name="booklevel" id="booklevel2"
                value="<?php echo 'Normal'; if (isset($_POST['booklevel'])); ?>"></label>
        <label for="booklevel" class="radio">Low
            <input type="radio" name="booklevel" id="booklevel3"
                value="<?php echo 'Easy'; if (isset($_POST['booklevel'])); ?>"></label>

    This all works fine, by the way, when the user adds the book... But does anyone know how, in my update book form, I can draw the value of what level has been set, and have the box checked? To draw up the values in the text fields, I'm simply using:

        <?php echo $row['bookname']?>

    I also noticed a small issue when I call up the values for my select options. I have the drop down select field display the currently set user (to read the book!); however, the drop down menu again displays that user in the list of available options to select, basically meaning two of the same names appear in the list! Is there a way to eliminate the value of the SELECTED option? So far my setup for this is like:

        <select name="user_id" id="user_id">
            <option value="<?php echo $row['user_id']?>" SELECTED><?php echo $row['fullname']?></option>
            <?php while($row = mysql_fetch_array($result)) { ?>
            <option value="<?php echo $row['user_id']?>"><?php echo $row['name']?></option>
            <?php } ?>
        </select>

    If anyone can help me I'll be very grateful. Sorry for the incredibly long question!! :)
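    A common pattern for both problems, sketched against the question's own column names (booklevel, user_id, name); $book here is a hypothetical row holding the book being edited:

        <?php $level = $book['booklevel']; // 'Hard', 'Normal' or 'Easy' ?>
        <input type="radio" name="booklevel" value="Hard"
            <?php if ($level == 'Hard') echo 'checked'; ?>> Hard
        <input type="radio" name="booklevel" value="Normal"
            <?php if ($level == 'Normal') echo 'checked'; ?>> Normal
        <input type="radio" name="booklevel" value="Easy"
            <?php if ($level == 'Easy') echo 'checked'; ?>> Easy

        <select name="user_id" id="user_id">
        <?php while ($row = mysql_fetch_array($result)) { ?>
            <option value="<?php echo $row['user_id']; ?>"
                <?php if ($row['user_id'] == $book['user_id']) echo 'selected'; ?>>
                <?php echo $row['name']; ?>
            </option>
        <?php } ?>
        </select>

    Marking the stored user's option as selected inside the loop, instead of emitting a separate first <option>, means each name appears only once in the list.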

    Read the article

  • Parallel processing slower than sequential?

    - by zebediah49
    EDIT: For anyone who stumbles upon this in the future: ImageMagick uses an MP (multiprocessing) library. It's faster to use available cores if they're around, but if you have parallel jobs, it's unhelpful. Do one of the following:

    - do your jobs serially (with ImageMagick in parallel mode), or
    - set MAGICK_THREAD_LIMIT=1 for your invocation of the ImageMagick binary in question.

    By making ImageMagick use only one thread, it slows down by 20-30% in my test cases, but that meant I could run one job per core without issues, for a significant net increase in performance. A sketch of this follows the timings below.

    Original question: While converting some images using ImageMagick, I noticed a somewhat strange effect. Using xargs was significantly slower than a standard for loop. Since xargs limited to a single process should act like a for loop, I tested that, and found it to be about the same. Thus, we have this demonstration:

    - Quad core (AMD Athlon X4, 2.6GHz)
    - Working entirely on a tmpfs (16 GB RAM total; no swap)
    - No other major loads

    Results:

        /media/ramdisk/img$ time for f in *.bmp; do echo $f ${f%bmp}png; done | xargs -n 2 -P 1 convert -auto-level
        real    0m3.784s
        user    0m2.240s
        sys     0m0.230s

        /media/ramdisk/img$ time for f in *.bmp; do echo $f ${f%bmp}png; done | xargs -n 2 -P 2 convert -auto-level
        real    0m9.097s
        user    0m28.020s
        sys     0m0.910s

        /media/ramdisk/img$ time for f in *.bmp; do echo $f ${f%bmp}png; done | xargs -n 2 -P 10 convert -auto-level
        real    0m9.844s
        user    0m33.200s
        sys     0m1.270s

    Can anyone think of a reason why running two instances of this program takes more than twice as long in real time, and more than ten times as long in processor time, to complete the same task? After that initial hit, more processes do not seem to have as significant an effect. I thought it might have to do with disk seeking, so I did that test entirely in RAM. Could it have something to do with how convert works, with having more than one copy at once meaning it cannot use the processor cache as efficiently or something?

    EDIT: When done with 1000x 769KB files, performance is as expected. Interesting.

        /media/ramdisk/img$ time for f in *.bmp; do echo $f ${f%bmp}png; done | xargs -n 2 -P 1 convert -auto-level
        real    3m37.679s
        user    5m6.980s
        sys     0m6.340s

        /media/ramdisk/img$ time for f in *.bmp; do echo $f ${f%bmp}png; done | xargs -n 2 -P 1 convert -auto-level
        real    3m37.152s
        user    5m6.140s
        sys     0m6.530s

        /media/ramdisk/img$ time for f in *.bmp; do echo $f ${f%bmp}png; done | xargs -n 2 -P 2 convert -auto-level
        real    2m7.578s
        user    5m35.410s
        sys     0m6.050s

        /media/ramdisk/img$ time for f in *.bmp; do echo $f ${f%bmp}png; done | xargs -n 2 -P 4 convert -auto-level
        real    1m36.959s
        user    5m48.900s
        sys     0m6.350s

        /media/ramdisk/img$ time for f in *.bmp; do echo $f ${f%bmp}png; done | xargs -n 2 -P 10 convert -auto-level
        real    1m36.392s
        user    5m54.840s
        sys     0m5.650s
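    For completeness, a sketch of the single-thread workaround from the edit above, assuming the same ramdisk test setup (MAGICK_THREAD_LIMIT is ImageMagick's own environment variable for capping its internal threads):

        # Cap each convert process at one thread, then parallelize across jobs:
        export MAGICK_THREAD_LIMIT=1
        time for f in *.bmp; do echo $f ${f%bmp}png; done | xargs -n 2 -P 4 convert -auto-level

    With the internal threading disabled, the xargs -P level alone controls parallelism, so one job per core no longer oversubscribes the CPUs.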

    Read the article

  • Metro: Creating an IndexedDbDataSource for WinJS

    - by Stephen.Walther
    The goal of this blog entry is to describe how you can create custom data sources which you can use with the controls in the WinJS library. In particular, I explain how you can create an IndexedDbDataSource which you can use to store and retrieve data from an IndexedDB database. If you want to skip ahead, and ignore all of the fascinating content in-between, I've included the complete code for the IndexedDbDataSource at the very bottom of this blog entry.

    What is IndexedDB?

    IndexedDB is a database in the browser. You can use the IndexedDB API with all modern browsers including Firefox, Chrome, and Internet Explorer 10. And, of course, you can use IndexedDB with Metro style apps written with JavaScript. If you need to persist data in a Metro style app written with JavaScript then IndexedDB is a good option. Each Metro app can only interact with its own IndexedDB databases. And, IndexedDB provides you with transactions, indices, and cursors – the elements of any modern database.

    An IndexedDB database might be different than the type of database that you normally use. An IndexedDB database is an object-oriented database and not a relational database. Instead of storing data in tables, you store data in object stores. You store JavaScript objects in an IndexedDB object store.

    You create new IndexedDB object stores by handling the upgradeneeded event when you attempt to open a connection to an IndexedDB database. For example, here's how you would both open a connection to an existing database named TasksDB and create the TasksDB database when it does not already exist:

        var reqOpen = window.indexedDB.open("TasksDB", 2);

        reqOpen.onupgradeneeded = function (evt) {
            var newDB = evt.target.result;
            newDB.createObjectStore("tasks", { keyPath: "id", autoIncrement: true });
        };

        reqOpen.onsuccess = function () {
            var db = reqOpen.result;
            // Do something with db
        };

    When you call window.indexedDB.open(), and the database does not already exist, then the upgradeneeded event is raised. In the code above, the upgradeneeded handler creates a new object store named tasks. The new object store has an auto-increment column named id which acts as the primary key column. If the database already exists with the right version, and you call window.indexedDB.open(), then the success event is raised. At that point, you have an open connection to the existing database and you can start doing something with the database.

    You use asynchronous methods to interact with an IndexedDB database. For example, the following code illustrates how you would add a new object to the tasks object store:

        var transaction = db.transaction("tasks", "readwrite");
        var reqAdd = transaction.objectStore("tasks").add({ name: "Feed the dog" });
        reqAdd.onsuccess = function () {
            // Task added successfully
        };

    The code above creates a new database transaction, adds a new task to the tasks object store, and handles the success event. If the new task gets added successfully then the success event is raised.

    Creating a WinJS IndexedDbDataSource

    The most powerful control in the WinJS library is the ListView control. This is the control that you use to display a collection of items. If you want to display data with a ListView control, you need to bind the control to a data source. The WinJS library includes two objects which you can use as a data source: the List object and the StorageDataSource object.
    The List object enables you to represent a JavaScript array as a data source and the StorageDataSource enables you to represent the file system as a data source. If you want to bind an IndexedDB database to a ListView then you have a choice. You can either dump the items from the IndexedDB database into a List object or you can create a custom data source. I explored the first approach in a previous blog entry. In this blog entry, I explain how you can create a custom IndexedDB data source.

    Implementing the IListDataSource Interface

    You create a custom data source by implementing the IListDataSource interface. This interface contains the contract for the methods which the ListView needs to interact with a data source. The easiest way to implement the IListDataSource interface is to derive a new object from the base VirtualizedDataSource object. The VirtualizedDataSource object requires a data adapter which implements the IListDataAdapter interface. Yes, because of the number of objects involved, this is a little confusing. Your code ends up looking something like this:

        var IndexedDbDataSource = WinJS.Class.derive(
            WinJS.UI.VirtualizedDataSource,
            function (dbName, dbVersion, objectStoreName, upgrade, error) {
                this._adapter = new IndexedDbDataAdapter(dbName, dbVersion, objectStoreName, upgrade, error);
                this._baseDataSourceConstructor(this._adapter);
            },
            {
                nuke: function () {
                    this._adapter.nuke();
                },
                remove: function (key) {
                    this._adapter.removeInternal(key);
                }
            }
        );

    The code above is used to create a new class named IndexedDbDataSource which derives from the base VirtualizedDataSource class. In the constructor for the new class, the base class _baseDataSourceConstructor() method is called. A data adapter is passed to the _baseDataSourceConstructor() method.

    The code above creates a new method exposed by the IndexedDbDataSource named nuke(). The nuke() method deletes all of the objects from an object store. The code above also overrides a method named remove(). Our derived remove() method accepts any type of key and removes the matching item from the object store.

    Almost all of the work of creating a custom data source goes into building the data adapter class. The data adapter class implements the IListDataAdapter interface which contains the following methods:

    · change()
    · getCount()
    · insertAfter()
    · insertAtEnd()
    · insertAtStart()
    · insertBefore()
    · itemsFromDescription()
    · itemsFromEnd()
    · itemsFromIndex()
    · itemsFromKey()
    · itemsFromStart()
    · itemSignature()
    · moveAfter()
    · moveBefore()
    · moveToEnd()
    · moveToStart()
    · remove()
    · setNotificationHandler()
    · compareByIdentity

    Fortunately, you are not required to implement all of these methods. You only need to implement the methods that you actually need. In the case of the IndexedDbDataSource, I implemented the getCount(), itemsFromIndex(), insertAtEnd(), and remove() methods. If you are creating a read-only data source then you really only need to implement the getCount() and itemsFromIndex() methods.

    Implementing the getCount() Method

    The getCount() method returns the total number of items from the data source. So, if you are storing 10,000 items in an object store then this method would return the value 10,000.
    Here's how I implemented the getCount() method:

        getCount: function () {
            var that = this;
            return new WinJS.Promise(function (complete, error) {
                that._getObjectStore().then(function (store) {
                    var reqCount = store.count();
                    reqCount.onerror = that._error;
                    reqCount.onsuccess = function (evt) {
                        complete(evt.target.result);
                    };
                });
            });
        }

    The first thing that you should notice is that the getCount() method returns a WinJS promise. This is a requirement. The getCount() method is asynchronous, which is a good thing because all of the IndexedDB methods (at least the methods implemented in current browsers) are also asynchronous. The code above retrieves an object store and then uses the IndexedDB count() method to get a count of the items in the object store. The value is returned from the promise by calling complete().

    Implementing the itemsFromIndex() Method

    When a ListView displays its items, it calls the itemsFromIndex() method. By default, it calls this method multiple times to get different ranges of items. Three parameters are passed to the itemsFromIndex() method: the requestIndex, countBefore, and countAfter parameters. The requestIndex indicates the index of the item from the database to show. The countBefore and countAfter parameters represent hints. These are integer values which represent the number of items before and after the requestIndex to retrieve. Again, these are only hints and you can return as many items before and after the request index as you please.

    Here's how I implemented the itemsFromIndex() method:

        itemsFromIndex: function (requestIndex, countBefore, countAfter) {
            var that = this;
            return new WinJS.Promise(function (complete, error) {
                that.getCount().then(function (count) {
                    if (requestIndex >= count) {
                        return WinJS.Promise.wrapError(new WinJS.ErrorFromName(WinJS.UI.FetchError.doesNotExist));
                    }
                    var startIndex = Math.max(0, requestIndex - countBefore);
                    var endIndex = Math.min(count, requestIndex + countAfter + 1);
                    that._getObjectStore().then(function (store) {
                        var index = 0;
                        var items = [];
                        var req = store.openCursor();
                        req.onerror = that._error;
                        req.onsuccess = function (evt) {
                            var cursor = evt.target.result;
                            if (index < startIndex) {
                                index = startIndex;
                                cursor.advance(startIndex);
                                return;
                            }
                            if (cursor && index < endIndex) {
                                index++;
                                items.push({
                                    key: cursor.value[store.keyPath].toString(),
                                    data: cursor.value
                                });
                                cursor.continue();
                                return;
                            }
                            results = {
                                items: items,
                                offset: requestIndex - startIndex,
                                totalCount: count
                            };
                            complete(results);
                        };
                    });
                });
            });
        }

    In the code above, a cursor is used to iterate through the objects in an object store. You fetch the next item in the cursor by calling either the cursor.continue() or cursor.advance() method. The continue() method moves forward by one object and the advance() method moves forward a specified number of objects. Each time you call continue() or advance(), the success event is raised again. If the cursor is null then you know that you have reached the end of the cursor and you can return the results.

    Some things to be careful about here. First, the return value from the itemsFromIndex() method must implement the IFetchResult interface. In particular, you must return an object which has an items, offset, and totalCount property. Second, each item in the items array must implement the IListItem interface. Each item should have a key and a data property.
    Implementing the insertAtEnd() Method

    When creating the IndexedDbDataSource, I wanted to go beyond creating a simple read-only data source and support inserting and deleting objects. If you want to support adding new items with your data source then you need to implement the insertAtEnd() method. Here's how I implemented the insertAtEnd() method for the IndexedDbDataSource:

        insertAtEnd: function (unused, data) {
            var that = this;
            return new WinJS.Promise(function (complete, error) {
                that._getObjectStore("readwrite").done(function (store) {
                    var reqAdd = store.add(data);
                    reqAdd.onerror = that._error;
                    reqAdd.onsuccess = function (evt) {
                        var reqGet = store.get(evt.target.result);
                        reqGet.onerror = that._error;
                        reqGet.onsuccess = function (evt) {
                            var newItem = {
                                key: evt.target.result[store.keyPath].toString(),
                                data: evt.target.result
                            }
                            complete(newItem);
                        };
                    };
                });
            });
        }

    When implementing the insertAtEnd() method, you need to be careful to return an object which implements the IItem interface. In particular, you should return an object that has a key and a data property. The key must be a string and it uniquely represents the new item added to the data source. The value of the data property represents the new item itself.

    Implementing the remove() Method

    Finally, you use the remove() method to remove an item from the data source. You call the remove() method with the key of the item which you want to remove. Implementing the remove() method in the case of the IndexedDbDataSource was a little tricky. The problem is that an IndexedDB object store uses an integer key and the VirtualizedDataSource requires a string key. For that reason, I needed to override the remove() method in the derived IndexedDbDataSource class like this:

        var IndexedDbDataSource = WinJS.Class.derive(
            WinJS.UI.VirtualizedDataSource,
            function (dbName, dbVersion, objectStoreName, upgrade, error) {
                this._adapter = new IndexedDbDataAdapter(dbName, dbVersion, objectStoreName, upgrade, error);
                this._baseDataSourceConstructor(this._adapter);
            },
            {
                nuke: function () {
                    this._adapter.nuke();
                },
                remove: function (key) {
                    this._adapter.removeInternal(key);
                }
            }
        );

    When you call remove(), you end up calling a method of the IndexedDbDataAdapter named removeInternal(). Here's what the setNotificationHandler() and removeInternal() methods look like:

        setNotificationHandler: function (notificationHandler) {
            this._notificationHandler = notificationHandler;
        },

        removeInternal: function (key) {
            var that = this;
            return new WinJS.Promise(function (complete, error) {
                that._getObjectStore("readwrite").done(function (store) {
                    var reqDelete = store.delete(key);
                    reqDelete.onerror = that._error;
                    reqDelete.onsuccess = function (evt) {
                        that._notificationHandler.removed(key.toString());
                        complete();
                    };
                });
            });
        }

    The removeInternal() method calls the IndexedDB delete() method to delete an item from the object store. If the item is deleted successfully then the _notificationHandler.removed() method is called. Because we are not implementing the standard IListDataAdapter remove() method, we need to notify the data source (and the ListView control bound to the data source) that an item has been removed. The way that you notify the data source is by calling the _notificationHandler.removed() method. Notice that we get the _notificationHandler in the code above by implementing another method in the IListDataAdapter interface: the setNotificationHandler() method.
    You can raise the following types of notifications using the _notificationHandler:

    · beginNotifications()
    · changed()
    · endNotifications()
    · inserted()
    · invalidateAll()
    · moved()
    · removed()
    · reload()

    These methods are all part of the IListDataNotificationHandler interface in the WinJS library.

    Implementing the nuke() Method

    I wanted to implement a method which would remove all of the items from an object store. Therefore, I created a method named nuke() which calls the IndexedDB clear() method:

        nuke: function () {
            var that = this;
            return new WinJS.Promise(function (complete, error) {
                that._getObjectStore("readwrite").done(function (store) {
                    var reqClear = store.clear();
                    reqClear.onerror = that._error;
                    reqClear.onsuccess = function (evt) {
                        that._notificationHandler.reload();
                        complete();
                    };
                });
            });
        }

    Notice that the nuke() method calls the _notificationHandler.reload() method to notify the ListView to reload all of the items from its data source. Because we are implementing a custom method here, we need to use the _notificationHandler to send an update.

    Using the IndexedDbDataSource

    To illustrate how you can use the IndexedDbDataSource, I created a simple task list app. You can add new tasks, delete existing tasks, and nuke all of the tasks. You delete an item by selecting an item (swipe or right-click) and clicking the Delete button. Here's the HTML page which contains the ListView, the form for adding new tasks, and the buttons for deleting and nuking tasks:

        <!DOCTYPE html>
        <html>
        <head>
            <meta charset="utf-8" />
            <title>DataSources</title>

            <!-- WinJS references -->
            <link href="//Microsoft.WinJS.1.0.RC/css/ui-dark.css" rel="stylesheet" />
            <script src="//Microsoft.WinJS.1.0.RC/js/base.js"></script>
            <script src="//Microsoft.WinJS.1.0.RC/js/ui.js"></script>

            <!-- DataSources references -->
            <link href="indexedDb.css" rel="stylesheet" />
            <script type="text/javascript" src="indexedDbDataSource.js"></script>
            <script src="indexedDb.js"></script>
        </head>
        <body>
            <div id="tmplTask" data-win-control="WinJS.Binding.Template">
                <div class="taskItem">
                    Id: <span data-win-bind="innerText:id"></span>
                    <br /><br />
                    Name: <span data-win-bind="innerText:name"></span>
                </div>
            </div>

            <div id="lvTasks" data-win-control="WinJS.UI.ListView"
                data-win-options="{
                    itemTemplate: select('#tmplTask'),
                    selectionMode: 'single'
                }"></div>

            <form id="frmAdd">
                <fieldset>
                    <legend>Add Task</legend>
                    <label>New Task</label>
                    <input id="inputTaskName" required />
                    <button>Add</button>
                </fieldset>
            </form>

            <button id="btnNuke">Nuke</button>
            <button id="btnDelete">Delete</button>
        </body>
        </html>

    And here is the JavaScript code for the TaskList app:

        /// <reference path="//Microsoft.WinJS.1.0.RC/js/base.js" />
        /// <reference path="//Microsoft.WinJS.1.0.RC/js/ui.js" />

        function init() {
            WinJS.UI.processAll().done(function () {
                var lvTasks = document.getElementById("lvTasks").winControl;

                // Bind the ListView to its data source
                var tasksDataSource = new DataSources.IndexedDbDataSource("TasksDB", 1, "tasks", upgrade);
                lvTasks.itemDataSource = tasksDataSource;

                // Wire-up Add, Delete, Nuke buttons
                document.getElementById("frmAdd").addEventListener("submit", function (evt) {
                    evt.preventDefault();
                    tasksDataSource.beginEdits();
                    tasksDataSource.insertAtEnd(null, {
                        name: document.getElementById("inputTaskName").value
                    }).done(function (newItem) {
                        tasksDataSource.endEdits();
                        document.getElementById("frmAdd").reset();
                        lvTasks.ensureVisible(newItem.index);
                    });
                });

                document.getElementById("btnDelete").addEventListener("click", function () {
                    if (lvTasks.selection.count() == 1) {
                        lvTasks.selection.getItems().done(function (items) {
                            tasksDataSource.remove(items[0].data.id);
                        });
                    }
                });

                document.getElementById("btnNuke").addEventListener("click", function () {
                    tasksDataSource.nuke();
                });

                // This method is called to initialize the IndexedDb database
                function upgrade(evt) {
                    var newDB = evt.target.result;
                    newDB.createObjectStore("tasks", { keyPath: "id", autoIncrement: true });
                }
            });
        }

        document.addEventListener("DOMContentLoaded", init);

    The IndexedDbDataSource is created and bound to the ListView control with the following two lines of code:

        var tasksDataSource = new DataSources.IndexedDbDataSource("TasksDB", 1, "tasks", upgrade);
        lvTasks.itemDataSource = tasksDataSource;

    The IndexedDbDataSource is created with four parameters: the name of the database to create, the version of the database to create, the name of the object store to create, and a function which contains code to initialize the new database.

    The upgrade function creates a new object store named tasks with an auto-increment property named id:

        function upgrade(evt) {
            var newDB = evt.target.result;
            newDB.createObjectStore("tasks", { keyPath: "id", autoIncrement: true });
        }

    The Complete Code for the IndexedDbDataSource

    Here's the complete code for the IndexedDbDataSource:

        (function () {

            /************************************************
            * The IndexedDbDataAdapter enables you to work
            * with an HTML5 IndexedDB database.
            *************************************************/
            var IndexedDbDataAdapter = WinJS.Class.define(
                function (dbName, dbVersion, objectStoreName, upgrade, error) {
                    this._dbName = dbName;                    // database name
                    this._dbVersion = dbVersion;              // database version
                    this._objectStoreName = objectStoreName;  // object store name
                    this._upgrade = upgrade;                  // database upgrade script
                    this._error = error || function (evt) { console.log(evt.message); };
                },
                {
                    /*******************************************
                    * IListDataAdapter Interface Methods
                    ********************************************/

                    getCount: function () {
                        var that = this;
                        return new WinJS.Promise(function (complete, error) {
                            that._getObjectStore().then(function (store) {
                                var reqCount = store.count();
                                reqCount.onerror = that._error;
                                reqCount.onsuccess = function (evt) {
                                    complete(evt.target.result);
                                };
                            });
                        });
                    },

                    itemsFromIndex: function (requestIndex, countBefore, countAfter) {
                        var that = this;
                        return new WinJS.Promise(function (complete, error) {
                            that.getCount().then(function (count) {
                                if (requestIndex >= count) {
                                    return WinJS.Promise.wrapError(new WinJS.ErrorFromName(WinJS.UI.FetchError.doesNotExist));
                                }
                                var startIndex = Math.max(0, requestIndex - countBefore);
                                var endIndex = Math.min(count, requestIndex + countAfter + 1);
                                that._getObjectStore().then(function (store) {
                                    var index = 0;
                                    var items = [];
                                    var req = store.openCursor();
                                    req.onerror = that._error;
                                    req.onsuccess = function (evt) {
                                        var cursor = evt.target.result;
                                        if (index < startIndex) {
                                            index = startIndex;
                                            cursor.advance(startIndex);
                                            return;
                                        }
                                        if (cursor && index < endIndex) {
                                            index++;
                                            items.push({
                                                key: cursor.value[store.keyPath].toString(),
                                                data: cursor.value
                                            });
                                            cursor.continue();
                                            return;
                                        }
                                        results = {
                                            items: items,
                                            offset: requestIndex - startIndex,
                                            totalCount: count
                                        };
                                        complete(results);
                                    };
                                });
                            });
                        });
                    },

                    insertAtEnd: function (unused, data) {
                        var that = this;
                        return new WinJS.Promise(function (complete, error) {
                            that._getObjectStore("readwrite").done(function (store) {
                                var reqAdd = store.add(data);
                                reqAdd.onerror = that._error;
                                reqAdd.onsuccess = function (evt) {
                                    var reqGet = store.get(evt.target.result);
                                    reqGet.onerror = that._error;
                                    reqGet.onsuccess = function (evt) {
                                        var newItem = {
                                            key: evt.target.result[store.keyPath].toString(),
                                            data: evt.target.result
                                        }
                                        complete(newItem);
                                    };
                                };
                            });
                        });
                    },

                    setNotificationHandler: function (notificationHandler) {
                        this._notificationHandler = notificationHandler;
                    },

                    /*****************************************
                    * IndexedDbDataSource Methods
                    ******************************************/

                    removeInternal: function (key) {
                        var that = this;
                        return new WinJS.Promise(function (complete, error) {
                            that._getObjectStore("readwrite").done(function (store) {
                                var reqDelete = store.delete(key);
                                reqDelete.onerror = that._error;
                                reqDelete.onsuccess = function (evt) {
                                    that._notificationHandler.removed(key.toString());
                                    complete();
                                };
                            });
                        });
                    },

                    nuke: function () {
                        var that = this;
                        return new WinJS.Promise(function (complete, error) {
                            that._getObjectStore("readwrite").done(function (store) {
                                var reqClear = store.clear();
                                reqClear.onerror = that._error;
                                reqClear.onsuccess = function (evt) {
                                    that._notificationHandler.reload();
                                    complete();
                                };
                            });
                        });
                    },

                    /*******************************************
                    * Private Methods
                    ********************************************/

                    _ensureDbOpen: function () {
                        var that = this;
                        // Try to get cached Db
                        if (that._cachedDb) {
                            return WinJS.Promise.wrap(that._cachedDb);
                        }
                        // Otherwise, open the database
                        return new WinJS.Promise(function (complete, error, progress) {
                            var reqOpen = window.indexedDB.open(that._dbName, that._dbVersion);
                            reqOpen.onerror = function (evt) {
                                error();
                            };
                            reqOpen.onupgradeneeded = function (evt) {
                                that._upgrade(evt);
                                that._notificationHandler.invalidateAll();
                            };
                            reqOpen.onsuccess = function () {
                                that._cachedDb = reqOpen.result;
                                complete(that._cachedDb);
                            };
                        });
                    },

                    _getObjectStore: function (type) {
                        type = type || "readonly";
                        var that = this;
                        return new WinJS.Promise(function (complete, error) {
                            that._ensureDbOpen().then(function (db) {
                                var transaction = db.transaction(that._objectStoreName, type);
                                complete(transaction.objectStore(that._objectStoreName));
                            });
                        });
                    },

                    _get: function (key) {
                        var that = this;
                        return new WinJS.Promise(function (complete, error) {
                            that._getObjectStore().done(function (store) {
                                var reqGet = store.get(key);
                                reqGet.onerror = that._error;
                                reqGet.onsuccess = function (item) {
                                    complete(item);
                                };
                            });
                        });
                    }
                }
            );

            var IndexedDbDataSource = WinJS.Class.derive(
                WinJS.UI.VirtualizedDataSource,
                function (dbName, dbVersion, objectStoreName, upgrade, error) {
                    this._adapter = new IndexedDbDataAdapter(dbName, dbVersion, objectStoreName, upgrade, error);
                    this._baseDataSourceConstructor(this._adapter);
                },
                {
                    nuke: function () {
                        this._adapter.nuke();
                    },
                    remove: function (key) {
                        this._adapter.removeInternal(key);
                    }
                }
            );

            WinJS.Namespace.define("DataSources", {
                IndexedDbDataSource: IndexedDbDataSource
            });

        })();

    Summary

    In this blog post, I provided an overview of how you can create a new data source which you can use with the WinJS library. I described how you can create an IndexedDbDataSource which you can use to bind a ListView control to an IndexedDB database. While describing how you can create a custom data source, I explained how you can implement the IListDataAdapter interface. You also learned how to raise notifications, such as a removed or invalidateAll notification, by taking advantage of the methods of the IListDataNotificationHandler interface.

    Read the article

  • How do I resolve the "user isn't assigned to any management roles" error in Exchange 2010 EMC?

    - by TheoJones
    Newly installed Exchange 2010 box (technically, a partially installed box, as this error is preventing me from completing the install). When I launch the EMC or the Management PowerShell, I get this error:

    VERBOSE: Connecting to myserver.mydomain.internal
    [myserver.mydomain.internal] Processing data from remote server failed with the following error message: The user "mydomain\administrator" isn't assigned to any management roles. For more information, see the about_Remote_Troubleshooting Help topic.
    Failed to connect to any Exchange Server in the current site.

    Thing is, the logged-in administrator account (confirmed using 'whoami') is a member of the following groups:

    Administrators
    Delegated Setup
    Discovery Management
    Domain Admins
    Domain Users
    Enterprise Admins
    Exchange Organization Administrators
    GPO Creator Owners
    Organization Management
    Schema Admins
    Server Management

    Any ideas? How can I get past this?
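
    One workaround that comes up for this chicken-and-egg state (the shell is needed to fix role assignments, but role assignments are needed to open the shell) is to load the Exchange snap-in directly in a local elevated PowerShell, bypassing the remoting layer, and re-add the account to Organization Management. A sketch, untested here:

        Add-PSSnapin Microsoft.Exchange.Management.PowerShell.E2010
        Add-RoleGroupMember "Organization Management" -Member mydomain\administrator

    If that also fails, re-running setup /PrepareAD from the installation media is the other commonly suggested step, since it rebuilds the default role assignments.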

    Read the article

  • How to escape or remove double quotes in rsyslog template

    - by Evgeny
    I want rsyslog to write log messages in JSON format, which requires wrapping strings in double quotes ("). The problem is that values sometimes include double quotes themselves, and those need to be escaped, but I can't figure out how to do that. Currently my rsyslog.conf contains this format that I use (a bit simplified):

    $template JsonFormat,"{\"msg\":\"%msg%\",\"app-name\":\"%app-name%\"}\n",sql

    But when a msg arrives that contains double quotes, the JSON is broken. For example:

    user pid=21214 uid=0 auid=4294967295 msg='PAM setcred: user="oracle" exe="/bin/su" (hostname=?, addr=?, terminal=? result=Success)'

    turns into:

    {"msg":"user pid=21214 uid=0 auid=4294967295 msg='PAM setcred: user="oracle" exe="/bin/su" (hostname=?, addr=?, terminal=? result=Success)'","app-name":"user"}

    but what I need it to become is:

    {"msg":"user pid=21214 uid=0 auid=4294967295 msg='PAM setcred: user=\"oracle\" exe=\"/bin/su\" (hostname=?, addr=?, terminal=? result=Success)'","app-name":"user"}
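
    Recent rsyslog versions have a property-replacer option for exactly this: the json option escapes the property's value so it can be embedded in a JSON string. A sketch of the template rewritten with it (assuming a new enough rsyslog, roughly v6/v7 and later, and dropping the trailing sql option, which applies a different, conflicting escaping):

        $template JsonFormat,"{\"msg\":\"%msg:::json%\",\"app-name\":\"%app-name:::json%\"}\n"

    With that in place, embedded double quotes in msg should come out as \" in the output.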

    Read the article

  • Cannot connect to a VPN server - authentication failed with error code 691

    - by stacker
    When trying to connect to a VPN server, I get the 691 error code on the client, which says:

    Error Description: 691: The remote connection was denied because the user name and password combination you provided is not recognized, or the selected authentication protocol is not permitted on the remote access server.

    I validated that the username and password are correct. I also installed a certificate to use with the IKEv2 security type, and validated that the VPN server supports that security method. But I still cannot log in. In the server log I get this:

    Network Policy Server denied access to a user. The user DomainName\UserName connected from IP address but failed an authentication attempt due to the following reason: The remote connection was denied because the user name and password combination you provided is not recognized, or the selected authentication protocol is not permitted on the remote access server.

    Any idea what I can do? Thanks in advance!

    Log Name: Security
    Source: Microsoft-Windows-Security-Auditing
    Date: 12/29/2010 7:12:20 AM
    Event ID: 6273
    Task Category: Network Policy Server
    Level: Information
    Keywords: Audit Failure
    User: N/A
    Computer: VPN.domain.com
    Description: Network Policy Server denied access to a user. Contact the Network Policy Server administrator for more information.
    User:
        Security ID: domain\Administrator
        Account Name: domain\Administrator
        Account Domain: domani
        Fully Qualified Account Name: domain.com/Users/Administrator
    Client Machine:
        Security ID: NULL SID
        Account Name: -
        Fully Qualified Account Name: -
        OS-Version: -
        Called Station Identifier: 192.168.147.171
        Calling Station Identifier: 192.168.147.191
    NAS:
        NAS IPv4 Address: -
        NAS IPv6 Address: -
        NAS Identifier: VPN
        NAS Port-Type: Virtual
        NAS Port: 0
    RADIUS Client:
        Client Friendly Name: VPN
        Client IP Address: -
    Authentication Details:
        Connection Request Policy Name: Microsoft Routing and Remote Access Service Policy
        Network Policy Name: All
        Authentication Provider: Windows
        Authentication Server: VPN.domain.home
        Authentication Type: EAP
        EAP Type: Microsoft: Secured password (EAP-MSCHAP v2)
        Account Session Identifier: 313933
    Logging Results: Accounting information was written to the local log file.
    Reason Code: 16
    Reason: Authentication failed due to a user credentials mismatch. Either the user name provided does not map to an existing user account or the password was incorrect.

    Read the article

  • Exchange 2010: Receiving "You can't send a message on behalf of this user..." error when trying to configure delegate access

    - by Beaming Mel-Bin
    I am trying to give someone (John Doe) delegate access to another account (Jane Doe) in our test environment. However, I receive the following error from Exchange:

    Subject: Undeliverable: You have been designated as a delegate for Jane Dow
    To: Jane Dow

    Delivery has failed to these recipients or groups:

    John Doe
    You can't send a message on behalf of this user unless you have permission to do so. Please make sure you're sending on behalf of the correct sender, or request the necessary permission. If the problem continues, please contact your helpdesk.

    Diagnostic information for administrators:
    Generating server: /O=UNIONCO/OU=EXCHANGE ADMINISTRATIVE GROUP (FYD132341234)/CN=RECIPIENTS/CN=jdoe
    #MSEXCH:MSExchangeIS:/DC=local/DC=unionco:MAILBOX-1[578:0x000004DC:0x0000001D] #EX#

    Can someone help me troubleshoot this?
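
    The delegate invitation goes out on behalf of the delegator, so it bounces when the underlying Send on Behalf grant didn't take. One hedged workaround is to grant it explicitly from the Exchange Management Shell and then retry the delegate setup (names as in the scenario above; the cmdlets are standard Exchange 2010, but whether this clears the NDR in this particular environment is untested):

        Set-Mailbox "Jane Doe" -GrantSendOnBehalfTo "John Doe"
        Get-Mailbox "Jane Doe" | Format-List GrantSendOnBehalfTo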

    Read the article

  • How to create a Linux user without a password but being able to set it?

    - by Leonid Shevtsov
    I have a username and an SSH key for a (hypothetical) guy, and I need to give him admin access to a Linux (Ubuntu) server. I want him to be able to log in via SSH and then set his password by himself over a secure connection, instead of the password being passed around. I know how to make the password expire and force him to reset it on first login, but this doesn't work unless he already has some password, which I then have to tell him. I thought about making the password blank (SSH wouldn't allow logging in with it, but then anyone could su into the user). My question is: is there some best practice for creating accounts in such a way, or is setting a default password unavoidable?
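
    A sketch of one way to do it (untested; 'guy' is a placeholder username, and the sudoers rule is the piece that lets him set a password that never existed before):

        # create the account with no usable password and install his key
        adduser --disabled-password --gecos "" guy
        install -d -m 700 -o guy -g guy /home/guy/.ssh
        echo 'ssh-rsa AAAA... guy@somewhere' > /home/guy/.ssh/authorized_keys
        chown guy:guy /home/guy/.ssh/authorized_keys
        chmod 600 /home/guy/.ssh/authorized_keys

        # let him choose his own password over SSH without anyone else knowing it
        echo 'guy ALL=(root) NOPASSWD: /usr/bin/passwd guy' > /etc/sudoers.d/guy-passwd
        chmod 440 /etc/sudoers.d/guy-passwd

    With --disabled-password the account has no password at all (so neither password SSH login nor su works against it), the key still gets him in, and sudo passwd guy lets him pick one over the secure channel.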

    Read the article

  • Mac OS X behind OpenLDAP and Samba

    - by Sam Hammamy
    I have been battling for a week now to get my Mac (Mountain Lion) to authenticate against my home network's OpenLDAP and Samba. Drawing on several sources, like the Ubuntu community docs and other blogs, and after a hell of a lot of trial and error and piecing things together, I have created a samba.ldif that passes smbldap-populate when combined with apple.ldif, and I have a fully functional OpenLDAP server and a Samba PDC that uses LDAP to authenticate the OS X machine. The problem is that when I log in, the home directory is not created or pulled from the server. I get the following in system.log:

    Sep 21 06:09:15 Sams-MacBook-Pro.local SecurityAgent[265]: User info context values set for sam
    Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_authenticate(): Got user: sam
    Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_authenticate(): Got ruser: (null)
    Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_authenticate(): Got service: authorization
    Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in od_principal_for_user(): no authauth availale for user.
    Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in od_principal_for_user(): failed: 7
    Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_authenticate(): Failed to determine Kerberos principal name.
    Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_authenticate(): Done cleanup3
    Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_authenticate(): Kerberos 5 refuses you
    Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_authenticate(): pam_sm_authenticate: ntlm
    Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_acct_mgmt(): OpenDirectory - Membership cache TTL set to 1800.
    Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in od_record_check_pwpolicy(): retval: 0
    Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_setcred(): Establishing credentials
    Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_setcred(): Got user: sam
    Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_setcred(): Context initialised
    Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_setcred(): pam_sm_setcred: ntlm user sam doesn't have auth authority

    All that's great and good and I authenticate. Then I get:

    CFPreferences: user home directory for user kCFPreferencesCurrentUser at /Network/Servers/172.17.148.186/home/sam is unavailable. User domains will be volatile.
    Failed looking up user domain root; url='file://localhost/Network/Servers/172.17.148.186/home/sam/' path=/Network/Servers/172.17.148.186/home/sam/ err=-43 uid=9000 euid=9000

    If you're wondering where /Network/Servers/IP/home/sam comes from, it's from a couple of blogs that said the OpenLDAP attribute apple-user-homeDirectory should have that value and the NFSHomeDirectory on the Mac should point to apple-user-homeDirectory. I also set the attr apple-user-homeurl to <home_dir><url>smb://172.17.148.186/sam/</url><path></path></home_dir>, which I found on this forum. Any help is appreciated, because I'm banging my head against the wall at this point.
    By the way, I intend to create a blog on my VPS just for this, and create an install script in Python that people can download, so no one has to go through what I've had to go through this week :) After some sleep I am going to try to log in from a Windows machine and report back here. Thanks Sam
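
    For what it's worth, err=-43 in that CFPreferences line is the old Mac OS fnfErr ("file not found"), which points at the home URL never being mounted rather than at authentication failing. A quick diagnostic sketch from the Mac, to see what the directory node actually hands back (node address and record name as above):

        dscl /LDAPv3/172.17.148.186 -read /Users/sam NFSHomeDirectory HomeDirectory

    HomeDirectory should carry the <home_dir><url>smb://...</url>... blob and NFSHomeDirectory the /Network/Servers/... path; if either is missing or mangled on the client side, login proceeds but the home mount silently fails.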

    Read the article

  • Stop Cisco AnyConnect from locking down the NIC

    - by Johannes Rössel
    Cisco's VPN crapclients (including the AnyConnect one) have the nasty habit of clobbering all NICs on the system you run them on. The old client had a checkbox in the connection options that allowed you to use other network interfaces while connected, while the AnyConnect client seemingly doesn't have any options at all. But they both lock down the network interface they are using to connect to the VPN. Since I am forced to use AnyConnect to actually have an internet connection, and I like to control a second computer at home via RDP (over the same network interface so far), this doesn't quite work out. With the old client IPv6 still worked just fine, though AnyConnect seems to dislike that as well now. Is there any way to still use the same network interface for LAN access? I actually don't really care about any possible security implications (which might be why Cisco does this), as it's my freaking internet connection and not a secure way of working from home. The trade-off is quite different :-)

    Read the article

  • How to fix? => Your system administrator does not allow the use of saved credentials to log on to the remote computer

    - by Pure.Krome
    At our office, any of our Windows 7 clients get this error message when we try to RDP to a remote W2K8 server outside of the office:

    Your system administrator does not allow the use of saved credentials to log on to the remote computer XXX because its identity is not fully verified. Please enter new credentials.

    A quick Google search leads to some posts that all suggest I edit group policy, etc. I'm under the impression that the common fix for this is to follow those instructions per Windows 7 machine. Ack :( Is there any way I can do something via our office Active Directory which auto-updates all Windows 7 clients in the office LAN?
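
    For reference, the setting those posts describe can also live in a single domain GPO linked to the OU that holds the Windows 7 workstations, rather than in each machine's local policy. The policy involved looks like this (names from memory; verify against your ADMX templates):

        Computer Configuration
          > Administrative Templates
            > System
              > Credentials Delegation
                > Allow delegating saved credentials with NTLM-only server authentication
                  Enabled, with the server list containing:  TERMSRV/*

    TERMSRV/* covers RDP to any host; listing TERMSRV/hostname entries instead scopes it down to specific servers.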

    Read the article

  • CIFS(Samba) + ACL = not working

    - by tst
    I have two servers with Debian 5.0.

    server1:
    samba 2:3.2.5-4lenny9
    smbfs 2:3.2.5-4lenny9

    smb.conf:
    [test]
    comment = test
    path = /var/www/_test/
    browseable = no
    only guest = yes
    writable = yes
    printable = no
    create mask = 0644
    directory mask = 0755

    server1:~# mount | grep sda3
    /dev/sda3 on /var/www type ext3 (rw,acl,user_xattr)

    # getfacl /var/www/_test/
    # file: var/www/_test/
    # owner: www-data
    # group: www-data
    user::rwx
    group::rwx
    other::r-x
    default:user::rwx
    default:user:www-data:rw-
    default:user:testuser:rw-
    default:group::rwx
    default:mask::rwx
    default:other::r-x

    server2:
    samba-common 2:3.2.5-4lenny9
    smbfs 2:3.2.5-4lenny9

    server2:~# mount.cifs //server1/test /media/smb/test -o rw,user_xattr,acl
    server2:~# mount | grep test
    //server1/test on /media/smb/test type cifs (rw,mand)
    server2:~# getfacl /media/smb/test/
    # file: media/smb/test/
    # owner: www-data
    # group: www-data
    user::rwx
    group::rwx
    other::r-x
    default:user::rwx
    default:user:www-data:rw-
    default:user:testuser:rw-
    default:group::rwx
    default:mask::rwx
    default:other::r-x

    And there is the problem:

    server2:~# su - testuser
    (reverse-i-search)`touch': touch 123
    testuser@server2:~$ touch /media/smb/
    testuser@server2:~$ touch /media/smb/test/123
    touch: cannot touch `/media/smb/test/123': Permission denied

    What's wrong?
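
    One detail that stands out: acl and user_xattr were passed to mount.cifs, but the resulting mount shows only (rw,mand), so the client appears to be ignoring them. Two things commonly suggested from here (a sketch, untested):

        # let the server enforce permissions instead of the client
        mount.cifs //server1/test /media/smb/test -o rw,noperm

        # check whether the CIFS Unix extensions are active on the client
        cat /proc/fs/cifs/LinuxExtensionsEnabled

    With noperm the client skips its local permission check and the Samba server, which does see the POSIX ACLs on the ext3 filesystem, decides access; without Unix extensions negotiated on both ends, ACL awareness never reaches the client side.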

    Read the article

  • SSH service will not start on fresh Cygwin 1.7.15 install

    - by Coder6841
    OS: Windows 7 x64
    Cygwin: 1.7.15-1
    OpenSSH: 6.0p1-1

    I'm attempting to install an SSH server on Windows 7. The tutorial that I'm following to do this is here: http://www.howtogeek.com/howto/41560/how-to-get-ssh-command-line-access-to-windows-7-using-cygwin/

    The issue is that upon executing the net start sshd command I get the following output:

    The CYGWIN sshd service is starting.
    The CYGWIN sshd service could not be started.
    The service did not report an error.
    More help is available by typing NET HELPMSG 3534.

    Here is the full output of the setup:

    AdminUser@ThisComputer ~
    $ ssh-host-config
    *** Info: Generating /etc/ssh_host_key
    *** Info: Generating /etc/ssh_host_rsa_key
    *** Info: Generating /etc/ssh_host_dsa_key
    *** Info: Generating /etc/ssh_host_ecdsa_key
    *** Info: Creating default /etc/ssh_config file
    *** Info: Creating default /etc/sshd_config file
    *** Info: Privilege separation is set to yes by default since OpenSSH 3.3.
    *** Info: However, this requires a non-privileged account called 'sshd'.
    *** Info: For more info on privilege separation read /usr/share/doc/openssh/README.privsep.
    *** Query: Should privilege separation be used? (yes/no) yes
    *** Info: Note that creating a new user requires that the current account have
    *** Info: Administrator privileges. Should this script attempt to create a
    *** Query: new local account 'sshd'? (yes/no) yes
    *** Info: Updating /etc/sshd_config file
    *** Query: Do you want to install sshd as a service?
    *** Query: (Say "no" if it is already installed as a service) (yes/no) yes
    *** Query: Enter the value of CYGWIN for the daemon: []
    *** Info: On Windows Server 2003, Windows Vista, and above, the
    *** Info: SYSTEM account cannot setuid to other users -- a capability
    *** Info: sshd requires. You need to have or to create a privileged
    *** Info: account. This script will help you do so.
    *** Info: You appear to be running Windows XP 64bit, Windows 2003 Server,
    *** Info: or later. On these systems, it's not possible to use the LocalSystem
    *** Info: account for services that can change the user id without an
    *** Info: explicit password (such as passwordless logins [e.g. public key
    *** Info: authentication] via sshd).
    *** Info: If you want to enable that functionality, it's required to create
    *** Info: a new account with special privileges (unless a similar account
    *** Info: already exists). This account is then used to run these special
    *** Info: servers.
    *** Info: Note that creating a new user requires that the current account
    *** Info: have Administrator privileges itself.
    *** Info: No privileged account could be found.
    *** Info: This script plans to use 'cyg_server'.
    *** Info: 'cyg_server' will only be used by registered services.
    *** Query: Do you want to use a different name? (yes/no) no
    *** Query: Create new privileged user account 'cyg_server'? (yes/no) yes
    *** Info: Please enter a password for new user cyg_server. Please be sure
    *** Info: that this password matches the password rules given on your system.
    *** Info: Entering no password will exit the configuration.
    *** Query: Please enter the password:
    *** Query: Reenter:
    *** Info: User 'cyg_server' has been created with password '[CENSORED]'.
    *** Info: If you change the password, please remember also to change the
    *** Info: password for the installed services which use (or will soon use)
    *** Info: the 'cyg_server' account.
    *** Info: Also keep in mind that the user 'cyg_server' needs read permissions
    *** Info: on all users' relevant files for the services running as 'cyg_server'.
    *** Info: In particular, for the sshd server all users' .ssh/authorized_keys
    *** Info: files must have appropriate permissions to allow public key
    *** Info: authentication. (Re-)running ssh-user-config for each user will set
    *** Info: these permissions correctly. [Similar restrictions apply, for
    *** Info: instance, for .rhosts files if the rshd server is running, etc].
    *** Info: The sshd service has been installed under the 'cyg_server'
    *** Info: account. To start the service now, call `net start sshd' or
    *** Info: `cygrunsrv -S sshd'. Otherwise, it will start automatically
    *** Info: after the next reboot.
    *** Info: Host configuration finished. Have fun!

    AdminUser@ThisComputer ~
    $ net start sshd
    The CYGWIN sshd service is starting.
    The CYGWIN sshd service could not be started.
    The service did not report an error.
    More help is available by typing NET HELPMSG 3534.

    Note that on the line *** Query: Enter the value of CYGWIN for the daemon: [] I haven't entered anything. Tutorials often say to use ntsec or ntsec tty here, but those options are removed from the latest version of OpenSSH. I've tried using them anyway and the result is the same.

    The file /var/log/sshd.log is empty. If I try just running the command /usr/sbin/sshd I get the output /var/empty must be owned by root and not group or world-writable. The /var/empty directory has the following permissions: drwxr-xr-x+ 1 cyg_server root 0 May 29 15:28 empty. Google searches on this error did not turn up any working fixes. One person seems to have solved it by using the command chown SYSTEM /var/empty but that did not fix it in my case.
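
    The /var/empty message is the usual lead here: under Cygwin the privilege-separation check is made against the service account rather than SYSTEM, so the commonly suggested ownership fix (a sketch, untested; run from an elevated Cygwin shell) is:

        chown cyg_server /var/empty
        chmod 755 /var/empty
        chown cyg_server /var/log/sshd.log /etc/ssh*
        net start sshd

    Running /usr/sbin/sshd -D -d -e in the foreground afterwards is also worth a try, since debug mode prints the real failure reason that the service wrapper swallows.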

    Read the article

  • Why is GPO Tool reporting a GPO version mismatch when the GPO version #'s do match?

    - by SturdyErde
    Any ideas why the group policy diagnostic utility GPOTool would report a GPO version mismatch between two domain controllers if the version numbers are a match?

    Policy {GUID}
    Error: Version mismatch on dc1.domain.org, DS=65580, sysvol=65576
    Friendly name: Default Domain Controllers Policy
    Error: Version mismatch on dc2.domain.org, DS=65580, sysvol=65576

    Details:
    ------------------------------------------------------------
    DC: dc1.domain.org
    Friendly name: Default Domain Controllers Policy
    Created: 7/7/2005 6:39:33 PM
    Changed: 6/18/2012 12:33:04 PM
    DS version: 1(user) 44(machine)
    Sysvol version: 1(user) 40(machine)
    Flags: 0 (user side enabled; machine side enabled)
    User extensions: not found
    Machine extensions: [{GUID}]
    Functionality version: 2
    ------------------------------------------------------------
    DC: dc2.domain.org
    Friendly name: Default Domain Controllers Policy
    Created: 7/7/2005 6:39:33 PM
    Changed: 6/18/2012 12:33:05 PM
    DS version: 1(user) 44(machine)
    Sysvol version: 1(user) 40(machine)
    Flags: 0 (user side enabled; machine side enabled)
    User extensions: not found
    Machine extensions: [{GUID}]
    Functionality version: 2
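
    For reference, GPO version numbers are single 32-bit values with the user version in the high 16 bits and the machine version in the low 16 bits, so the numbers GPOTool prints decode as:

        DS     = 65580 = 1 * 65536 + 44   ->  user version 1, machine version 44
        sysvol = 65576 = 1 * 65536 + 40   ->  user version 1, machine version 40

    In other words, the user halves match but the machine halves differ (AD says 44, SYSVOL's GPT.INI says 40), and that machine-side difference is exactly what GPOTool is flagging.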

    Read the article

  • On MySQL 5.1 for Windows, why can't I assign DBA role to the "root" user?

    - by djangofan
    On MySQL 5.1 for Windows, why can't I assign the DBA role to the "root" user? MySQL Workbench allows me to add all the other roles except DBA. Also, when I "alter schema" on any table while logged in as root, I don't see all the tabs that show me all the database properties; I only see the first tab, which allows me to change the collation only. What is wrong with this picture? How do I give root all privileges? I've tried a few variations of GRANT ALL PRIVILEGES etc. from the command line but nothing works. My root account is unable to alter column names, indexes, or options of any given table that I create. I can create tables and delete them, but I can't alter them.
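
    Worth noting: MySQL 5.1 has no server-side roles, so Workbench's "DBA" entry appears to be just a client-side label for a bundle of global privileges. A sketch of the underlying grants from the mysql command line (the host part, 'localhost' vs '%', is an assumption and must match how root actually connects):

        GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost' WITH GRANT OPTION;
        FLUSH PRIVILEGES;
        SHOW GRANTS FOR 'root'@'localhost';

    If SHOW GRANTS doesn't list ALL PRIVILEGES ... WITH GRANT OPTION afterwards, the statements were probably run as an account that lacks GRANT OPTION itself.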

    Read the article

  • How can I tell what user account is being used by a service to access a network share on a Windows 2008 server?

    - by Mike B
    I've got a third-party app/service running on a Windows 2003 SP2 server that is trying to fetch something from a network share on a Windows 2008 box. Both boxes are members of an AD domain. For some reason, the app is complaining about having insufficient permissions to read/write to the store. The app itself doesn't have any special options for acting on the authority of another user account; it just asks for a UNC path. The service is running with a "log on as" setting of Local System account. I'd like to confirm what account it's using when trying to communicate with the network share. Conversely, I'd also like more details on whether and why it's being rejected by the Windows 2008 network share. Are there server-side logs on 2008 that could tell me exactly why a connection attempt to a share was rejected?
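
    One detail worth knowing: a service running as Local System does not go to the network anonymously; it authenticates to remote shares as the computer account (DOMAIN\MACHINE$ in AD), so it's the 2003 box's computer account that needs rights on the 2008 share. On the 2008 side, the Security event log records share access once share auditing is enabled; a sketch (auditpol syntax as I recall it, so verify locally):

        auditpol /get /subcategory:"File Share"
        auditpol /set /subcategory:"File Share" /success:enable /failure:enable

    After that, look for event ID 5140 ("A network share object was accessed") and the 4624/4625 logon events around it; they name the account that was actually presented.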

    Read the article

  • HP to Cisco spanning tree root flapping

    - by Tim Brigham
    Per a recent question, I recently configured both my HP (2x 2900) and Cisco (1x 3750) hardware to use MSTP for interoperability. I thought this was functional until I applied the change to the third device (HP Switch 1 below), at which time the spanning tree root started flapping, causing performance issues (5% packet loss) between my two HP switches. I'm not sure why.

    HP Switch 1 A4 connected to Cisco 1/0/1.
    HP Switch 2 B2 connected to Cisco 2/0/1.
    HP Switch 1 A2 connected to HP Switch 2 A1.

    I'd prefer the Cisco stack to act as the root.

    EDIT: There is one specific line, 'spanning-tree 1 path-cost 500000', in HP Switch 2 that I didn't add and was preexisting. I'm not sure if it could have the kind of impact that I'm describing. I'm more a security and monitoring guy than a networking one.

    EDIT 2: I'm starting to believe the problem lies in the fact that the value for my MST 0 instance on the Cisco is still at the default 32768. I worked up a diagram based on every show command I could find for STP. I'll make this change after hours and see if it helps.

    Cisco 3750 Config:

    version 12.2
    spanning-tree mode mst
    spanning-tree extend system-id
    spanning-tree mst configuration
     name mstp
     revision 1
     instance 1 vlan 1, 40, 70, 100, 250
    spanning-tree mst 1 priority 0
    vlan internal allocation policy ascending
    interface TenGigabitEthernet1/1/1
     switchport trunk encapsulation dot1q
     switchport mode trunk
    !
    interface TenGigabitEthernet2/1/1
     switchport trunk encapsulation dot1q
     switchport mode trunk
    !
    interface Vlan1
     no ip address
    !
    interface Vlan100
     ip address 192.168.100.253 255.255.255.0
    !

    Cisco 3750 show spanning-tree:

    MST0
      Spanning tree enabled protocol mstp
      Root ID    Priority    32768
                 Address     0004.ea84.5f80
                 Cost        200000
                 Port        53 (TenGigabitEthernet1/1/1)
                 Hello Time  2 sec  Max Age 20 sec  Forward Delay 15 sec
      Bridge ID  Priority    32768  (priority 32768 sys-id-ext 0)
                 Address     a44c.11a6.7c80
                 Hello Time  2 sec  Max Age 20 sec  Forward Delay 15 sec

    Interface           Role Sts Cost      Prio.Nbr Type
    ------------------- ---- --- --------- -------- --------------------------------
    Te1/1/1             Root FWD 2000      128.53   P2p

    MST1
      Spanning tree enabled protocol mstp
      Root ID    Priority    1
                 Address     a44c.11a6.7c80
                 This bridge is the root
                 Hello Time  2 sec  Max Age 20 sec  Forward Delay 15 sec
      Bridge ID  Priority    1      (priority 0 sys-id-ext 1)
                 Address     a44c.11a6.7c80
                 Hello Time  2 sec  Max Age 20 sec  Forward Delay 15 sec

    Interface           Role Sts Cost      Prio.Nbr Type
    ------------------- ---- --- --------- -------- --------------------------------
    Te1/1/1             Desg FWD 2000      128.53   P2p

    Cisco 3750 show logging:

    %LINEPROTO-5-UPDOWN: Line protocol on Interface Vlan1, changed state to down
    %LINEPROTO-5-UPDOWN: Line protocol on Interface Vlan100, changed state to down
    %LINEPROTO-5-UPDOWN: Line protocol on Interface Vlan1, changed state to up
    %LINEPROTO-5-UPDOWN: Line protocol on Interface Vlan100, changed state to up
    %LINEPROTO-5-UPDOWN: Line protocol on Interface Vlan1, changed state to down
    %LINEPROTO-5-UPDOWN: Line protocol on Interface Vlan1, changed state to up

    HP Switch 1 Config:

    ; J9049A Configuration Editor; Created on release #T.13.71
    vlan 1
       name "DEFAULT_VLAN"
       untagged 1-8,10,13-16,18-23,A1-A4
       ip address 100.100.100.17 255.255.255.0
       no untagged 9,11-12,17,24
       exit
    vlan 100
       name "192.168.100"
       untagged 9,11-12,17,24
       tagged 1-8,10,13-16,18-23,A1-A4
       no ip address
       exit
    vlan 21
       name "Users_2"
       tagged 1,A1-A4
       no ip address
       exit
    vlan 40
       name "Cafe"
       tagged 1,4,7,A1-A4
       no ip address
       exit
    vlan 250
       name "Firewall"
       tagged 1,4,7,A1-A4
       no ip address
       exit
    vlan 70
       name "DMZ"
       tagged 1,4,7-8,13,A1-A4
       no ip address
       exit
    spanning-tree
    spanning-tree config-name "mstp"
    spanning-tree config-revision 1
    spanning-tree instance 1 vlan 1 40 70 100 250
    password manager
    password operator

    HP Switch 1 show spanning-tree:

    Multiple Spanning Tree (MST) Information
      STP Enabled      : Yes
      Force Version    : MSTP-operation
      IST Mapped VLANs : 2-39,41-69,71-99,101-249,251-4094
      Switch MAC Address : 0021f7-126580
      Switch Priority    : 32768
      Max Age : 20   Max Hops : 20   Forward Delay : 15
      Topology Change Count  : 363,490
      Time Since Last Change : 14 hours
      CST Root MAC Address : 0004ea-845f80
      CST Root Priority    : 32768
      CST Root Path Cost   : 200000
      CST Root Port        : 1
      IST Regional Root MAC Address : 0021f7-126580
      IST Regional Root Priority    : 32768
      IST Regional Root Path Cost   : 0
      IST Remaining Hops            : 20
      Root Guard Ports     :
      TCN Guard Ports      :
      BPDU Protected Ports :
      BPDU Filtered Ports  :
      PVST Protected Ports :
      PVST Filtered Ports  :

                        |           Prio            | Designated      Hello
      Port  Type        | Cost      rity State      | Bridge          Time PtP Edge
      ----- ----------- + --------- ---- ---------- + --------------- ---- --- ----
      A1                | Auto      128  Disabled   |
      A2    10GbE-CX4   | 2000      128  Forwarding | 0021f7-126580   2    Yes No
      A3    10GbE-CX4   | Auto      128  Disabled   |
      A4    10GbE-SR    | Auto      128  Disabled   |

    HP Switch 1 Logging (I removed the date/time fields since they are inaccurate; no NTP is configured on these switches):

    00839 stp: MSTI 1 Root changed from 0:a44c11-a67c80 to 32768:0021f7-126580
    00839 stp: MSTI 1 Root changed from 32768:0021f7-126580 to 0:a44c11-a67c80
    00842 stp: MSTI 1 starved for an MSTI Msg Rx on port A4 from 0:a44c11-a67c80
    00839 stp: MSTI 1 Root changed from 0:a44c11-a67c80 to 32768:0021f7-126580
    00839 stp: MSTI 1 Root changed from 32768:0021f7-126580 to 0:a44c11-a67c80
    00839 stp: MSTI 1 Root changed from 0:a44c11-a67c80 to ...

    HP Switch 2 Configuration:

    ; J9146A Configuration Editor; Created on release #W.14.49
    vlan 1
       name "DEFAULT_VLAN"
       untagged 1,3-17,21-24,A1-A2,B2
       ip address 100.100.100.36 255.255.255.0
       no untagged 2,18-20,B1
       exit
    vlan 100
       name "192.168.100"
       untagged 2,18-20
       tagged 1,3-17,21-24,A1-A2,B1-B2
       no ip address
       exit
    vlan 21
       name "Users_2"
       tagged 1,A1-A2,B2
       no ip address
       exit
    vlan 40
       name "Cafe"
       tagged 1,13-14,16,A1-A2,B2
       no ip address
       exit
    vlan 250
       name "Firewall"
       tagged 1,13-14,16,A1-A2,B2
       no ip address
       exit
    vlan 70
       name "DMZ"
       tagged 1,13-14,16,A1-A2,B2
       no ip address
       exit
    logging 192.168.100.18
    spanning-tree
    spanning-tree 1 path-cost 500000
    spanning-tree config-name "mstp"
    spanning-tree config-revision 1
    spanning-tree instance 1 vlan 1 40 70 100 250

    HP Switch 2 show spanning-tree:

    Multiple Spanning Tree (MST) Information
      STP Enabled      : Yes
      Force Version    : MSTP-operation
      IST Mapped VLANs : 2-39,41-69,71-99,101-249,251-4094
      Switch MAC Address : 0024a8-cd6000
      Switch Priority    : 32768
      Max Age : 20   Max Hops : 20   Forward Delay : 15
      Topology Change Count  : 21,793
      Time Since Last Change : 14 hours
      CST Root MAC Address : 0004ea-845f80
      CST Root Priority    : 32768
      CST Root Path Cost   : 200000
      CST Root Port        : A1
      IST Regional Root MAC Address : 0021f7-126580
      IST Regional Root Priority    : 32768
      IST Regional Root Path Cost   : 2000
      IST Remaining Hops            : 19
      Root Guard Ports     :
      TCN Guard Ports      :
      BPDU Protected Ports :
      BPDU Filtered Ports  :
      PVST Protected Ports :
      PVST Filtered Ports  :

                        |           Prio            | Designated      Hello
      Port  Type        | Cost      rity State      | Bridge          Time PtP Edge
      ----- ----------- + --------- ---- ---------- + --------------- ---- --- ----
      A1    10GbE-CX4   | 2000      128  Forwarding | 0021f7-126580   2    Yes No
      A2    10GbE-CX4   | Auto      128  Disabled   |
      B1    SFP+SR      | 2000      128  Forwarding | 0024a8-cd6000   2    Yes No
      B2                | Auto      128  Disabled   |

    HP Switch 2 Logging (date/time fields removed for the same reason):

    00839 stp: CST Root changed from 32768:0021f7-126580 to 32768:0004ea-845f80
    00839 stp: IST Root changed from 32768:0021f7-126580 to 32768:0024a8-cd6000
    00839 stp: CST Root changed from 32768:0004ea-845f80 to 32768:0024a8-cd6000
    00839 stp: CST Root changed from 32768:0024a8-cd6000 to 32768:0004ea-845f80
    00839 stp: CST Root changed from 32768:0004ea-845f80 to 32768:0024a8-cd6000
    00435 ports: port B1 is Blocked by STP
    00839 stp: CST Root changed from 32768:0024a8-cd6000 to 32768:0021f7-126580
    00839 stp: IST Root changed from 32768:0024a8-cd6000 to 32768:0021f7-126580
    00839 stp: CST Root changed from 32768:0021f7-126580 to 32768:0004ea-845f80
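
    The suspicion in EDIT 2 looks plausible: the Cisco show output has MST1 rooted at the 3750 (priority 1), but MST0/CST is still rooted elsewhere at the default 32768. The planned change would look roughly like this in IOS global configuration (a sketch, untested here):

        spanning-tree mst 0 priority 4096

    Lower wins the root election, so 4096 (or 0) on the stack should pull the CST/IST root onto the 3750 as well and stop the regional roots from fighting.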

    Read the article

  • What's the best way of handling permissions for apache2's user www-data in /var/www?

    - by gyaresu
    Has anyone got a nice solution for handling files in /var/www? We're running name-based virtual hosts and the apache2 user is www-data. We've got two regular users and root, so when messing with files in /var/www, rather than having to run...

    chown -R www-data:www-data

    ...all the time, what's a good way of handling this? Supplementary question: how hardcore do you then go on permissions? This one has always been a problem in collaborative development environments. Cheers.
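
    A common pattern here (a sketch; 'alice' and 'bob' stand in for the two regular users) is to group-own the tree and let the setgid bit keep new files in the right group:

        usermod -a -G www-data alice
        usermod -a -G www-data bob
        chgrp -R www-data /var/www
        chmod -R g+w /var/www
        find /var/www -type d -exec chmod g+s {} \;

    On the permissions question: many setups then tighten to 640/750, or drop g+w on files Apache only needs to read, so the web server can read everything but write only where uploads genuinely land.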

    Read the article

  • How to alter the mail-from address in Postfix?

    - by Simon
    Hi all, does anyone know how to alter the mail-from of a Postfix mail server? For example, I have a Postfix mail server which sends mail for the domain example.org. When a Linux user, whose account is user.example.org (mapped in postfix/virtual to [email protected]), tries to send an email, its mail-from is user[email protected].

    HELO hostname: server.hostname.org
    Source IP: one ip here
    mail-from: user[email protected]

    Problems:

    user.example.org instead of just user.
    server.hostname.org instead of just example.org.

    Desired mail-from: [email protected].

    This is causing me problems with SPF records, for example (example.org differs from server.hostname.org)... any idea what the problem can be? Thanks in advance, Simon.
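
    One mechanism that fits this description is Postfix's generic(5) address mapping, which rewrites envelope and header addresses on mail leaving via SMTP. A hedged sketch, with left-hand entries guessed from the description above:

        # main.cf
        smtp_generic_maps = hash:/etc/postfix/generic

        # /etc/postfix/generic  (local-form address on the left, public form on the right)
        user.example.org@server.hostname.org    user@example.org
        @server.hostname.org                     @example.org

        # then rebuild the map and reload
        postmap /etc/postfix/generic
        postfix reload

    Setting myorigin = example.org in main.cf is the other half: it controls which domain gets appended to unqualified sender addresses in the first place.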

    Read the article

  • User Friendly port knocker (port knocking client) for Windows?

    - by Ekevoo
    It seems "It's me" is the most popular port knocking client for windows… Except… it sucks. It works for console-savvy users such as me, but, unsurprisingly, all my users hate console windows. I know better than to force it upon them. I would love to have a nice port knocker for Windows that would be windowed, have launchers, and be easily provisionable (i.e. I tell my user to paste some settings or import some file by double clicking it). To be honest, just not being console-based would be enough.

    Read the article
