Search Results

Search found 27016 results on 1081 pages for 'entry point'.


  • Pluscom MAC address shows up but "wireless unavailable"

    - by Luigi
    I've got a Pluscom PCI card plugged into my desktop and although the MAC address shows up in Network Manager, it says "Wireless unavailable". I'm using Ubuntu 12.10 Gnome Shell Remix.

        edgar@edgarQuantal:~$ iwconfig
        eth0      no wireless extensions.
        lo        no wireless extensions.
        wlan0     IEEE 802.11bg  ESSID:off/any
                  Mode:Managed  Access Point: Not-Associated   Tx-Power=0 dBm
                  Retry long limit:7   RTS thr:off   Fragment thr:off
                  Power Management:off

        edgar@edgarQuantal:~$ ifconfig
        eth0      Link encap:Ethernet  HWaddr 6c:f0:49:02:1f:b8
                  inet addr:192.168.0.7  Bcast:192.168.0.255  Mask:255.255.255.0
                  inet6 addr: fe80::6ef0:49ff:fe02:1fb8/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:1425 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:1393 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:995753 (995.7 KB)  TX bytes:134079 (134.0 KB)
        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:363 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:363 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:40828 (40.8 KB)  TX bytes:40828 (40.8 KB)

    It looks like the same problem as here, which seems quite complicated: http://ubuntuforums.org/showthread.php?t=1993226

    I tried downloading the driver but I get this:

        Unpacking firmware-ralink (from .../firmware-ralink_0.36_all.deb) ...
        dpkg: error processing /home/edgar/Downloads/firmware-ralink_0.36_all.deb (--install):
         trying to overwrite '/lib/firmware/rt2870.bin', which is also in package linux-firmware 1.95
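
    One thing worth checking before fighting the .deb (a sketch, not a confirmed fix): the dpkg error only says that rt2870.bin is already shipped by the stock linux-firmware package, so the firmware is most likely already in place, and Tx-Power=0 dBm often just means the radio is soft-blocked. The commands below assume the card really is an rt2870-family Ralink device.

        # confirm the firmware file is already present
        ls -l /lib/firmware/rt2870.bin

        # check for a soft/hard rfkill block and clear it
        rfkill list
        sudo rfkill unblock all

        # bring the interface up and see whether the driver loaded its firmware
        sudo ip link set wlan0 up
        dmesg | grep -iE 'rt28|firmware'

    If the block clears and the firmware loads, Network Manager should start listing networks without the extra firmware-ralink package at all.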

    Read the article

  • Rewrite rule Mod_Proxy truncates file name [duplicate]

    - by Valerio Cicero
    This question already has an answer here:

        Redirect, Change URLs or Redirect HTTP to HTTPS in Apache - Everything You Ever Wanted to Know about Mod_Rewrite Rules but Were Afraid to Ask (5 answers)

    I searched online for a solution, but found nothing :(. I wrote this simple rule:

        RewriteRule ^(.*)$ http://www.mysite.com/$1 [P,NE,QSA,L]

    On mysite.it I have an .htaccess with this rule and it works, but if I have a link "http://www.mysite.it/public/file name.html" the server points to "http://www.mysite.it/public/file". I have tried many solutions but I can't solve it. I tried this, and many shades of it...

        RewriteRule ^(.*)(%20)(.*)$ "http://www.mysite.com/$1$3" [P,NE,QSA,L]

    Thanks!
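
    One possibility worth testing (a sketch, not a verified fix): the NE (noescape) flag stops mod_rewrite from re-escaping the substituted URL, so the literal space survives into the proxied request line and everything after it is lost. Dropping NE lets Apache encode the space as %20 before proxying:

        RewriteEngine On
        RewriteRule ^(.*)$ http://www.mysite.com/$1 [P,QSA,L]

    If other parts of the substituted URL must stay unescaped, the B flag (escape backreferences) combined with NE is another avenue, but the simple form above is the first thing to try.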

    Read the article

  • How to create iso dvd image for use with Mac OS X's "Disk Utility"?

    - by adolf garlic
    This seemed way easier on my PC. I would just pop a blank DVD in the drive, it would ask what I wanted to do with it, to which I would respond "burn DVD with Nero" (paraphrasing), then I would pick "new" and just drag and drop the folders in there. The Mac appears to have "Disk Utility", which just requires that I 'choose an image' but then doesn't bother to detail: how to do this, or what the options mean, e.g. is the format "Mac OS Extended (Journaled)" ever going to be readable on a non-Mac machine? I want to create an ISO standard DVD as per the 'default' you'd get in Nero. All the stuff on the web points to doing things with 'Terminal' (the whole point of buying a Mac was to get away from command line jiggery pokery - I'm trying to burn some photos, not land a friggin lunar module here!) Please, if you can just provide some simple instructions on what I need to achieve this, I'd be extremely grateful. Ta muchley in advance.
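
    If a bit of Terminal can be tolerated after all, two hdiutil commands do the whole job; a minimal sketch, with the folder and output names as placeholders:

        # build an ISO9660/Joliet image (readable on Windows and Linux) from a folder of photos
        hdiutil makehybrid -iso -joliet -o ~/Desktop/photos.iso ~/Pictures/ToBurn

        # burn the image to the blank DVD
        hdiutil burn ~/Desktop/photos.iso

    The resulting .iso can typically be burned from Disk Utility as well; the point is that an ISO9660/Joliet image, unlike Mac OS Extended (Journaled), is what a non-Mac machine will read.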

    Read the article

  • What to do with a Blowfish Key?

    - by Encoderer
    I just completed backing up 8 years of my Gmail using http://gmvault.org. I selected the --encrypt option, which uses Blowfish encryption. According to their site:

        Emails can be encrypted with the option -e --encrypt. With that option, the Blowfish encryption is used to crypt your emails and chats and the first time you activate it, a secret key is randomly generated and stored in $HOME/.gmvault/token.sec. Keep great care of the secret key as if you loose or delete it your stored emails won't be readable anymore !!!

    I'm using OSX Lion. I'm a software engineer but far from an encryption expert. What should I do with this key? It seems like leaving it where it is now (alongside the emails) sort of misses the point of encrypting them to begin with.
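
    A sketch of one reasonable arrangement (paths are placeholders; the only assumption is that the key should not live solely on the same disk as the archive it protects): lock the file down, then keep a copy somewhere separate, such as a small encrypted disk image on another drive or a password manager entry.

        # restrict the key to your user
        chmod 600 ~/.gmvault/token.sec

        # keep an offline copy in a small encrypted image (hdiutil will prompt for a passphrase)
        hdiutil create -size 10m -fs HFS+ -encryption AES-256 -volname gmvault-key ~/gmvault-key.dmg
        hdiutil attach ~/gmvault-key.dmg
        cp ~/.gmvault/token.sec /Volumes/gmvault-key/
        hdiutil detach /Volumes/gmvault-key

    The .dmg (and its passphrase) can then be moved off the machine, while the working copy in ~/.gmvault stays where gmvault expects it.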

    Read the article

  • Which platform to choose, Java or .NET?

    - by salman
    I am working in a private bank, a leading mid-size bank in the local market. We are going to create our core banking solution. The existing solution has been developed in Java using IBM Visual Age 4.0. It is very important to discuss the architecture first: we currently have more than 350 branches working in standalone mode, which means they work in self-contained environments. They have their own database servers (IBM DB2 9.7) and they communicate with other branches via sockets to send and receive data. Having more than 5 years of experience with .NET, I am trying to convince my superiors to choose the .NET platform, but they are reluctant and unwilling. It is my job to encourage them to choose the best available platform for creating a large-scale enterprise application. In simple words, we are going to create a very large-scale enterprise financial application, centralized and integrated, which connects all branch networks, with a scalable, solid architecture that can easily evolve over time. I would like professional people to comment on the above scenario: which platform should we choose, .NET or Java? All our resources currently work in Java, and we have a homogeneous environment (no Linux, no Mac and no UNIX). Any ideas, thoughts or points, technical or non-technical (i.e. from an administrative or management point of view), will be really appreciated.

    Read the article

  • Programming and Ubiquitous Language (DDD) in a non-English domain

    - by Sandor Drieënhuizen
    I know there are some questions already here that are closely related to this subject, but none of them take Ubiquitous Language as the starting point, so I think that justifies this question.

    For those who don't know: Ubiquitous Language is the concept of defining a (both spoken and written) language that is equally used across developers and domain experts to avoid inconsistencies and miscommunication due to translation problems and misunderstanding. You will see the same terminology show up in code, conversations between any team member, functional specs and whatnot.

    So, what I was wondering about is how to deal with Ubiquitous Language in non-English domains. Personally, I strongly favor writing programming code entirely in English, including comments, but of course excluding constants and resources. However, in a non-English domain, I'm forced to make a decision either to:

    1. Write code reflecting the Ubiquitous Language in the natural language of the domain.
    2. Translate the Ubiquitous Language to English and stop communicating in the natural language of the domain.
    3. Define a table that defines how the Ubiquitous Language translates to English.

    Here are some of my thoughts based on these options:

    1) I have a strong aversion against mixed-language code, that is, coding using type/member/variable names etc. that are non-English. Most programming languages 'breathe' English to a large extent and most of the technical literature, design pattern names etc. are in English as well. Therefore, in most cases there's just no way of writing code entirely in a non-English language, so you end up with mixed languages.

    2) This will force the domain experts to start thinking and talking in the English equivalent of the UL, something that will probably not come naturally to them and therefore hinders communication significantly.

    3) In this case, the developers communicate with the domain experts in their native language while the developers communicate with each other in English and, most importantly, they write code using the English translation of the UL.

    I'm sure I don't want to go for the first option and I think option 3 is much better than option 2. What do you think? Am I missing other options?

    Read the article

  • Basket Analysis with #dax in #powerpivot and #ssas #tabular

    - by Marco Russo (SQLBI)
    A few days ago I published a new article on the DAX Patterns web site describing how to implement Basket Analysis in DAX. This topic is a very classical one and is also covered in the many-to-many revolution white paper. It has also been discussed in several blog posts, listed here in historical order:

    - Simple Basket Analysis in DAX by Chris Webb
    - PowerPivot, basket analysis and the hidden many to many by Alberto Ferrari
    - Applied Basket Analysis in Power Pivot using DAX by Gerhard Brueckl

    As usual, in DAX Patterns we try to present the required DAX formulas in a way that is easy to adapt to specific models. We also try to show a good implementation from a performance point of view. Further optimizations are always possible in DAX. However, in order to keep the model simple to adapt in different scenarios, we avoid presenting optimizations that would require particular assumptions or restrictions on the data model.

    I hope you will find the Basket Analysis pattern useful. Even if you do not need it today, reading the DAX formula is a good exercise to check your knowledge of evaluation contexts in DAX. For example, describing how the following expression works is not a trivial task!

        [Orders with Both Products] :=
        CALCULATE (
            DISTINCTCOUNT ( Sales[SalesOrderNumber] ),
            CALCULATETABLE (
                SUMMARIZE ( Sales, Sales[SalesOrderNumber] ),
                ALL ( Product ),
                USERELATIONSHIP ( Sales[ProductCode], 'Filter Product'[Filter ProductCode] )
            )
        )

    The good news is that you can use the patterns even if you do not really understand all the details of the DAX formulas you are using! Any feedback on this new pattern is very welcome.

    Read the article

  • Economical DNS hosting separate from local registrar for country specific TLDs - email or web hosting not required

    - by Eric Nguyen
    Our company owns many country-specific top level domains (TLDs; .sg, .my). We will purchase more for other countries in all of South East Asia. These domains are associated with our websites hosted on Amazon EC2. The DNS records are currently hosted on a dedicated server that will shut down tomorrow (the name servers are set to the ones of a web hosting company). Therefore, I will need to host the DNS records somewhere else.

    Hosting the DNS records with the local registrar costs SGD18 a year per domain in addition to the domain price (which is already very expensive, but we have no choice). It would be convenient to host DNS records for all the country-specific TLDs we have using a single service, separate from the local registrars from which we bought the domains. A few searches prompted examples like Amazon Route 53 and dnsmadeeasy.com and the likes. However, since I'm only concerned about the country-specific TLDs, not .com:

    1. Is it really economical to host DNS records of all domains in one single place as described above? (Have the relevant countries and/or the local registrars done something to keep their monopoly and always charge ridiculous prices for their country-specific TLDs?)
    2. I would imagine I will need to tell the local registrars to update the name servers to those of the DNS hosting service provider, e.g. dnsmadeeasy.com here. Am I correct about how this works?
    3. Will I be able to point the TLDs themselves to IP addresses I desire (the EC2 instances where my websites are), or will I only be able to do so with the subdomains?
    4. Are there any drawbacks that I should know about here?

    Background about our needs: we need the websites associated with the country TLDs to be up and running at all times. We'll also need to be able to add/edit A and CNAME records. We use Google Apps for Business for internal email, so I will need to be able to add/edit MX records and TXT records.
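
    On questions 2 and 3, the mechanics are the same regardless of which hosted DNS provider ends up holding the zone: the registrar only stores the NS delegation, and the zone itself can carry an A record at the bare domain pointing straight at an EC2 address (an Elastic IP, so it survives instance restarts), plus whatever CNAME/MX/TXT records Google Apps needs. A minimal zone-style sketch, with the IP and domain purely as placeholders:

        example.sg.        A      203.0.113.10                 ; Elastic IP of the EC2 instance
        www.example.sg.    CNAME  example.sg.
        example.sg.        MX 1   aspmx.l.google.com.          ; Google Apps mail
        example.sg.        TXT    "v=spf1 include:_spf.google.com ~all"

    The bare domain has to be an A record (a CNAME is not allowed at the zone apex), which is why the Elastic IP matters.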

    Read the article

  • Doesn't boot after installation

    - by jchysk
    - Downloaded Ubuntu 12.04.1-alternate-amd64
    - Installed it to a USB stick
    - The integrity check fails on ./install/netboot/ubuntu-installer/amd64/pxelinux.cfg/default, but that seems to be a known bug where the file isn't included in the alternate 64-bit ISO and shouldn't affect installation, so I ignore it and proceed.

    For partitioning on the 2 SSD drives:

    - Partition 300MB and 63GB on both
    - RAID1 the 300MB pair and the 63GB pair
    - Set the 300MB array to EXT4 on /boot
    - Encrypt the rest as MD1 and set it up for LVM
    - Create two volumes from MD1: 4GB swap and 59GB for /

    I go through the installation and get to the point where it says everything is ready and to take the media out so as to boot from the drives. On startup I receive the error "Error: No video mode activated."

    I've read that this can be solved by running "cp /usr/share/grub/*.pf2 /boot/grub" and then updating GRUB, but I can't get to a place where I can actually run this command. In rescue mode I can get to a shell from the installer with /boot mounted at /target. From there I can run "cp /cdrom/boot/grub/font.pf2 /target/grub/", but I can't figure out how to get it to update GRUB after that, or what to change when manually updating the grub.cfg file. If I try other devices to mount the root filesystem I get the error "An error occurred while mounting the device you entered for your root file system". It just sits on the video mode error and doesn't progress further. Googling around, it seems like people usually see the error briefly before booting continues rather than getting stuck on it the way I am, which leads me to believe that the error may be unrelated to Ubuntu not booting. Any ideas as to what I should try next, or what needs to be done to install Ubuntu and get it to boot, would be helpful.
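
    A sketch of how one might run update-grub against the installed system from the rescue shell; the device and volume group names below are assumptions and will need adjusting (check with lsblk, mdadm --detail and vgs):

        # unlock the encrypted RAID1 volume and activate LVM
        cryptsetup luksOpen /dev/md1 md1_crypt
        vgchange -ay

        # mount the installed system (LV names are guesses)
        mount /dev/mapper/vg-root /mnt
        mount /dev/md0 /mnt/boot
        mount --bind /dev  /mnt/dev
        mount --bind /proc /mnt/proc
        mount --bind /sys  /mnt/sys

        # work inside the installed system: copy the GRUB fonts and regenerate grub.cfg
        chroot /mnt
        cp /usr/share/grub/*.pf2 /boot/grub/
        update-grub
        exit

    If the machine still stops at "No video mode activated", adding GRUB_TERMINAL=console to /etc/default/grub (inside the same chroot) before re-running update-grub is another commonly suggested thing to try.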

    Read the article

  • MySQL: "UPDATE command denied to user ''@'localhost'"

    - by Uncle Nerdicus
    For some reason when I installed MySQL on my machine (a Mac running OS X 10.9) the 'root' MySQL account got messed up and I don't have access to it, but I do have access to the standard MySQL account 'sean@localhost', which I use to log into phpMyAdmin. I am trying to reset the 'root' password by starting the mysqld daemon using the command mysqld --skip-grant-tables and then running the following lines in the mysql shell:

        mysql> UPDATE mysql.user SET Password=PASSWORD('MyNewPass')
            -> WHERE User='root';
        mysql> FLUSH PRIVILEGES;

    The problem is that when I try to run that MySQL statement, the daemon spits back

        ERROR 1142 (42000): UPDATE command denied to user ''@'localhost' for table 'user'

    as if I didn't use the -u argument when I started the mysql shell, even though I did. Any help is muchly appreciated as I am lost at this point. :/
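
    One thing worth ruling out (a guess, and the paths assume the standard Oracle .dmg install, so adjust as needed): ERROR 1142 while --skip-grant-tables is supposedly active usually means the client actually connected to a different, still-running mysqld that is enforcing grants. Making sure only the skip-grant instance is up before connecting:

        # stop any MySQL server that is already running
        sudo /usr/local/mysql/support-files/mysql.server stop

        # start a single instance with grant checking disabled
        sudo mysqld_safe --skip-grant-tables &

        # connect (no password will be asked) and run the UPDATE / FLUSH PRIVILEGES from the question
        mysql -u root mysql

    Once the password is reset, the server should be restarted normally so that grant checking is back in force.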

    Read the article

  • DNS Nameserver, A name and CNAME records [closed]

    - by David
    I am inexperienced in the configuration of DNS and have an issue with domain hosting set up. I have two domains, 'www.mydomain1.com' and 'www.mydomain2.com', with mydomain2 pointed at the same place as mydomain1. The domains were passed to me recently by the person who previously controlled them. I have an account with Fasthosts in the UK. When I accepted the domains I could not access the DNS settings and inquired with Fasthosts as to why. The reply was:

        The delegate hosting option for both domains were enabled and this is the reason why you were unable to find the option to edit the advanced DNS records. I have now disabled the delegate hosting option so you can now edit the advanced DNS records for both domains in your account.

    When I log into the Fasthosts control panel now I can access the DNS controls, but both domains have no A record or CNAME record set up. I am concerned that Fasthosts have blatted the previous nameserver entries and set me up on theirs, but not added any records. 'www.mydomain1.com' currently still works but 'www.mydomain2.com' does not find the site anymore. I am worried I will lose mydomain1 too as the DNS changes filter through the system. My web hosting is at 'xxx.xxx.xxx.xxx/mydomain1.com/' and this is where I want both domains to point. Any advice would be much appreciated.

    One thing which is confusing me is that, because I am on a shared server, I have to put 'xxx.xxx.xxx.xxx/mydomain1.com/' to get to my site rather than just 'xxx.xxx.xxx.xxx'. The form on Fasthosts for the A name record only allows an IP to be entered - does it add the mydomain1.com/ onto the end itself? Thanks for any help given - I'm quite worried about this. David

    Read the article

  • Ubuntu and racadm

    - by lmqcn
    I recently purchased a used PowerEdge 1850 server and it came with a DRAC card. After wiping the HDD and installing Ubuntu Server 12.04.3 LTS amd64 on it, I am now trying to gain access to the DRAC, which I believe is version 4. I have properly configured the DRAC to use its own IP on my LAN, and when I point my browser to the IP address I am greeted with the DRAC login page (it has the Dell logo and everything). However, after trying the credentials of root/calvin, I was denied access, so I think the previous owners had set their own password. After doing some reading, it appears that I can reset the credentials to the default using:

        racadm config -g cfgUserAdmin -o cfgUserAdminPassword -i 1 newpassword

    but upon entering the command, I get this error:

        bash: /usr/sbin/racadm: No such file or directory

    This holds true even if I run sudo su prior to running the racadm command. If, however, I run:

        sudo racadm config -g cfgUserAdmin -o cfgUserAdminPassword -i 1 newpassword

    there are no errors. Yet, when I try to log into the DRAC via the web interface using the credentials root/newpassword, I am still not granted access. I installed the Dell utilities via the guide at https://wiki.ubuntu.com/HardwareSupportMachinesServersDellNotes. I first tried to install the 64-bit version that is on the Dell repositories, but after that was unsuccessful I just followed the guide verbatim. No errors were produced in either case. I even followed the information at the bottom of the guide by executing:

        sudo pppd /dev/ttyS1 1382400 crtscts noipdefault noauth lock persist connect 'chat -v "" CLIENT CLIENTSERVER "\\c"'

    but obviously replacing the /dev/ttyS1 with the correct information for my system.

        ls -l /usr/sbin/ | grep racadm

    yields

        -rwxr-xr-x 1 root root 87930 Sep 16 04:03 racadm

    I have tried these credentials after each attempt at changing the password:

    - root/calvin
    - root/newpassword
    - admin/calvin
    - admin/newpassword

    All have been unsuccessful. What is the next course of action that I should take?
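
    One cheap thing to rule out (a guess, not a diagnosis): bash reporting "No such file or directory" for a binary that plainly exists is the classic symptom of a 32-bit executable on an amd64 system that has no 32-bit loader installed. Checking that, and installing the usual Ubuntu compatibility package if so:

        # "ELF 32-bit" in the output would confirm it
        file /usr/sbin/racadm

        # 32-bit loader and base libraries
        sudo apt-get install libc6-i386

        # then confirm what is actually stored at index 1 before setting the password again
        # (the admin account on a used DRAC may live at a different index)
        sudo racadm getconfig -g cfgUserAdmin -i 1
        sudo racadm config -g cfgUserAdmin -o cfgUserAdminPassword -i 1 newpassword

    If getconfig shows an unexpected user name at index 1, walking through a few indexes (-i 2, -i 3, ...) should reveal where the previous owner's account actually sits.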

    Read the article

  • Can Linux work as an 802.1X authenticator in bridge mode?

    - by Kartoch
    I want to create a lab for my students with netkit (a User-Mode Linux based network emulator) to study 802.1X. I can create the authentication server (FreeRADIUS) and configure the client with XSupplicant, connected by a switch (a Linux box in bridge mode). I'm looking for a way to configure the switch as the authenticator, i.e.:

    - when the client is connected, the switch only forwards EAP packets to the authentication server;
    - then the switch lets the user access the local network once the authentication server authorizes the client.

    At present there is a lot of documentation on doing this with a wireless access point, but none for a switch. Does anyone have an idea, or know of good software for it?
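
    For what it's worth, hostapd can play the authenticator role on a wired interface (driver=wired), which is one way to build this on the Linux "switch". A minimal hostapd.conf sketch, with the interface name, addresses and shared secret as placeholders:

        interface=eth1
        driver=wired
        ieee8021x=1
        use_pae_group_addr=1

        # RADIUS authentication server (FreeRADIUS in this lab)
        own_ip_addr=10.0.0.1
        auth_server_addr=10.0.0.2
        auth_server_port=1812
        auth_server_shared_secret=testing123

    This only covers the EAP/RADIUS side; the bridge itself is still configured separately (brctl / ip link), and hostapd is pointed at the port-facing interface rather than at the bridge device.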

    Read the article

  • Framework 4 Features: Login Id Support

    - by Anthony Shorten
    Given that Oracle Utilities Application Framework 4 is available as part of Mobile Work Force Management, and progressively as part of other products, I am preparing a number of short but sweet blog entries highlighting some of the new functionality that has been implemented. This is the first entry, and it is about a new security feature called Login Id.

    In past releases of the Oracle Utilities Application Framework, the userid used for authentication and authorization was limited to eight (8) characters in length. This mirrored what the market required in the past, with LAN userids and even legacy userids being that length. The technology market has since progressed to longer userid lengths. It is very common to hear that email addresses are being used as credentials for production systems. To achieve this in past versions of the Oracle Utilities Application Framework, sites had to introduce a short userid (8 characters in length) as an alias in your preferred security store. You then configured your J2EE Web Application Server to use the alias as credentials. This was sometimes a standard feature of the security store and/or the J2EE Web Application Server, if you were lucky. If not, some Java code had to be written to implement the solution.

    In Oracle Utilities Application Framework 4 we introduced a new attribute on the user object called Login Id. The Login Id can be up to 256 characters in length and is an alternative to the existing userid stored on the user object. This means the Oracle Utilities Application Framework can support both long and short userids. For backward compatibility we use the Login Id for authentication, but the short userid for authorization and auditing. The user object within the Oracle Utilities Application Framework holds the translation. Backward compatibility is always a consideration in any of our designs for future or changed functionality. You will see reference to this fact in the blog entries I will be composing over the next few months.

    We have also thought about the flexibility in implementing this feature. The Login Id can be the same value as the Userid (the default, for backward compatibility) or can be different. Both the Login Id and Userid have to be unique. This avoids sharing of credentials and is also backward compatible. You can manually enter the Login Id or provision it from Oracle Identity Manager (or another tool). If you use the Login Id only, then we will not autogenerate a short userid automatically, as the rules for this can vary from site to site. You have a number of options there. Most identity provisioning tools can generate a short userid at user creation time and this can be used. If you do not use provisioning tools, then you can write a class extension using the SDK to autogenerate the userid based upon your site's preference. When we designed the feature there were lots of styles of generating userids (random, initial and surname, numbers etc.). We could not really see a clear winner in that respect, so we just allowed the extension to be inserted if necessary. Most customers indicated to us that identity provisioning was the preferred way. This is why we released an Oracle Identity Manager integration with the framework.

    The Login Id is case sensitive, which was not supported for the userid. The introduction of the Login Id allows the product to offer flexible options when configuring security whilst maintaining backward compatibility.

    Read the article

  • The Stub Proto: Not Just For Stub Objects Anymore

    - by user9154181
    One of the great pleasures of programming is to invent something for a narrow purpose, and then to realize that it is a general solution to a broader problem. In hindsight, these things seem perfectly natural and obvious. The stub proto area used to build the core Solaris consolidation has turned out to be one of those things.

    As discussed in an earlier article, the stub proto area was invented as part of the effort to use stub objects to build the core ON consolidation. Its purpose was merely as a place to hold stub objects. However, we keep finding other uses for it. It turns out that the stub proto should be more properly thought of as an auxiliary place to put things that we would like to put into the proto to help us build the product, but which we do not wish to package or deliver to the end user. Stub objects are one example, but private lint libraries, header files, archives, and relocatable objects are all examples of things that might profitably go into the stub proto.

    Without a stub proto, these items were handled in a variety of ad hoc ways. If one part of the workspace needed private header files, libraries, or other such items, it might modify its Makefile to reach up and over to the place in the workspace where those things live and use them from there. There are several problems with this:

    - Each component invents its own approach, meaning that programmers maintaining the system have to invest extra effort to understand what things mean. In the past, this has created makefile ghettos in which only the person who wrote the makefiles feels confident to modify them, while everyone else ignores them. This causes many difficulties and benefits no one.
    - These interdependencies are not obvious to the make utility, and can lead to races.
    - They are not obvious to the human reader, who may therefore not realize that they exist, and break them.

    Our policy in ON is not to deliver files into the proto unless those files are intended to be packaged and delivered to the end user. However, sometimes non-shipping files were copied into the proto anyway, causing a different set of problems:

    - It requires a long list of exceptions to silence our normal unused proto item error checking.
    - In the past, we have accidentally shipped files that we did not intend to deliver to the end user.
    - Mixing cruft with valuable items makes it hard to discern which is which.

    The stub proto area offers a convenient and robust solution. Files needed to build the workspace that are not delivered to the end user can instead be installed into the stub proto. No special exceptions or custom make rules are needed, and the intent is always clear. We are already accessing some private lint libraries and compilation symlinks in this manner. Ultimately, I'd like to see all of the files in the proto that have a packaging exception delivered to the stub proto instead, and the elimination of all existing special case makefile rules. This would include shared objects, header files, and lint libraries. I don't expect this to happen overnight; it will be a long term, case by case project, but the overall trend is clear.

    The Stub Proto, -z assert_deflib, And The End Of Accidental System Object Linking

    We recently used the stub proto to solve an annoying build issue that goes back to the earliest days of Solaris: how to ensure that we're linking to the OS bits we're building instead of to those from the running system.
    The Solaris product is made up of objects and files from a number of different consolidations, each of which is built separately from the others from an independent code base called a gate. The core Solaris OS consolidation is ON, which stands for "Operating System and Networking". You will frequently also see ON called the OSnet. There are consolidations for X11 graphics, the desktop environment, open source utilities, compilers and development tools, and many others. The collection of consolidations that make up Solaris is known as the "Wad Of Stuff", usually referred to simply as the WOS.

    None of these consolidations is self contained. Even the core ON consolidation has some dependencies on libraries that come from other consolidations. The build server used to build the OSnet must be running a relatively recent version of Solaris, which means that its objects will be very similar to the new ones being built. However, it is necessarily true that the build system objects will always be a little behind, and that incompatible differences may exist.

    The objects built by the OSnet link to other objects. Some of these dependencies come from the OSnet, while others come from other consolidations. The objects from other consolidations are provided by the standard library directories on the build system (/lib, /usr/lib). The objects from the OSnet itself are supposed to come from the proto areas in the workspace, and not from the build server. In order to achieve this, we make use of the -L command line option to the link-editor. The link-editor finds dependencies by looking in the directories specified by the caller using the -L command line option. If the desired dependency is not found in one of these locations, ld will then fall back to looking at the default locations (/lib, /usr/lib). In order to use OSnet objects from the workspace instead of the system, while still accessing non-OSnet objects from the system, our Makefiles set -L link-editor options that point at the workspace proto areas. In general, this works well and dependencies are found in the right places. However, there have always been failures:

    - Building objects in the wrong order might mean that an OSnet dependency hasn't been built before an object that needs it. If so, the dependency will not be seen in the proto, and the link-editor will silently fall back to the one on the build server.
    - Errors in the makefiles can wipe out the -L options that our top level makefiles establish to cause ld to look at the workspace proto first. In this case, all objects will be found on the build server.

    These failures were rarely if ever caught. As I mentioned earlier, the objects on the build server are generally quite close to the objects built in the workspace. If they offer compatible linking interfaces, then the objects that link to them will behave properly, and no issue will ever be seen. However, if they do not offer compatible linking interfaces, the failure modes can be puzzling and hard to pin down. Either way, there won't be a compile-time warning or error.

    The advent of the stub proto eliminated the first type of failure. With stub objects, there is no dependency ordering, and the necessary stub object dependency will always be in place for any OSnet object that needs it. However, makefile errors do still occur, and so the second form of error was still possible.
    While working on the stub object project, we realized that the stub proto was also the key to solving the second form of failure, caused by makefile errors:

    - Due to the way we set the -L options to point at our workspace proto areas, any valid object from the OSnet should be found via a path specified by -L, and not from the default locations (/lib, /usr/lib).
    - Any OSnet object found via the default locations means that we've linked to the build server, which is an error we'd like to catch.
    - Non-OSnet objects don't exist in the proto areas, and so are found via the default paths. However, if we were to create a symlink in the stub proto pointing at each non-OSnet dependency that we require, then the non-OSnet objects would also be found via the paths specified by -L, and not from the link-editor defaults.
    - Given the above, we should not find any dependency objects from the link-editor defaults. Any dependency found via the link-editor defaults means that we have a Makefile error, and that we are linking to the build server inappropriately.

    All we need to make use of this fact is a linker option to produce a warning when it happens. Although warnings are nice, we in the OSnet have a zero tolerance policy for build noise. The -z fatal-warnings option that was recently introduced with -z guidance can be used to turn the warnings into fatal build errors, forcing the programmer to fix them. This was too easy to resist. I integrated

        7021198 ld option to warn when link accesses a library via default path
        PSARC/2011/068 ld -z assert-deflib option

    into snv_161 (February 2011), shortly after the stub proto was introduced into ON. This putback introduced the -z assert-deflib option to the link-editor:

        -z assert-deflib=[libname]

        Enables warning messages for libraries specified with the -l command line
        option that are found by examining the default search paths provided by the
        link-editor. If a libname value is provided, the default library warning
        feature is enabled, and the specified library is added to a list of libraries
        for which no warnings will be issued. Multiple -z assert-deflib options can
        be specified in order to specify multiple libraries for which warnings
        should not be issued.

        The libname value should be the name of the library file, as found by the
        link-editor, without any path components. For example, the following enables
        default library warnings, and excludes the standard C library.

            ld ... -z assert-deflib=libc.so ...

        -z assert-deflib is a specialized option, primarily of interest in build
        environments where multiple objects with the same name exist and tight
        control over the library used is required. It is not intended for general use.

    Note that the definition of -z assert-deflib allows for exceptions to be specified as arguments to the option. In general, the idea of using a symlink from the stub proto is superior because it does not clutter up the link command with a long list of objects. When building the OSnet, we usually use the plain form of -z assert-deflib, and make symlinks for the non-OSnet dependencies. The exception to this is for dependencies supplied by the compiler itself, which are usually found at whatever arbitrary location the compiler happens to be installed at. To handle these special cases, the command line version works better. Following the integration of the link-editor change, I made use of -z assert-deflib in OSnet builds with

        7021896 Prevent OSnet from accidentally linking to build system

    which integrated into snv_162 (March 2011).
    Turning on -z assert-deflib exposed between 10 and 20 existing errors in our Makefiles, which were all fixed in the same putback. The errors we found in our Makefiles underscore how difficult they can be to prevent without an automatic system in place to catch them.

    Conclusions

    The stub proto is proving to be a generally useful construct for ON builds that goes beyond serving as a place to hold stub objects. Although invented to hold stub objects, it has already allowed us to simplify a number of previously difficult situations in our makefiles and builds. I expect that we'll find uses for it beyond those described here as we go forward.
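
    To make the mechanism concrete, here is a rough sketch of the kind of link line involved; the paths, proto variables and library names are illustrative, not taken from the actual OSnet makefiles. The workspace proto and stub proto directories are searched first via -L, -z assert-deflib flags any library that is nevertheless satisfied from /lib or /usr/lib, and -z fatal-warnings turns that warning into a hard build failure:

        cc -o prog main.o \
            -L$STUBROOT/lib -L$ROOT/lib \
            -Wl,-z,assert-deflib=libc.so \
            -Wl,-z,fatal-warnings \
            -lnvpair

    Here libnvpair is expected to resolve from the (stub) proto; if a makefile error dropped the -L options, the link would fall back to the build system's copy and the build would stop instead of silently succeeding.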

    Read the article

  • Keyboard lights sometimes stay on after PC is powered off. Why?

    - by Vilx-
    Sometimes when I power off my PC some lights on the keyboard stay on. It's pretty random which ones, though I suspect it has something to do with which lights were on just before the shutdown (if I make sure they are all off while the PC is shutting down, then they stay off). Not that this would change anything, but it is annoying. Why does this happen and how can I avoid it? Added: I know that I can turn off the power completely to my computer and it will go off. That's not the point. Actually, I even want my keyboard to have power, because I use it to turn on my computer (the power button on the case is difficult to reach). I just want the lights to go off. Or at least to understand why this happens. Oh, and it's a PS/2 keyboard.

    Read the article

  • Snow Leopard: Optimization

    - by Shyam
    Hi, I have a bunch of questions:

    1. I have a Mac network with five Macs. Right now, they are individually getting software updates. Is there a way to download the patches/security updates to a single place (a repository) and point all machines to this location?
    2. Personally, I have tools like Monolingual and Onyx, but are there tools you could recommend that affect the performance of the Operating System positively? Tweaks would be nice. Links and pointers would be really appreciated.
    3. I've read about Time Machine; is there a way to back up all machines to a network drive using this tool?

    Thanks!
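
    On the first and third points, a sketch of the usual approach; the server URL and settings are placeholders and assume a local Software Update Server (such as the one bundled with Mac OS X Server) and an ordinary network share for backups:

        # point a client at a local Software Update Server instead of Apple's
        sudo defaults write /Library/Preferences/com.apple.SoftwareUpdate CatalogURL \
            "http://sus.example.local:8088/index.sucatalog"

        # let Time Machine offer unsupported network shares as backup destinations
        defaults write com.apple.systempreferences TMShowUnsupportedNetworkVolumes 1

    Deleting the CatalogURL key later returns the client to Apple's own update servers.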

    Read the article

  • Site migration and SEO impact

    - by John Smith
    I'd greatly appreciate a response on the following question relating to site migration and SEO impact. Here's some background on how my domain name and site are currently configured.

    My domain name provider has the following settings:

    - host name @ is an A NAME record and points to IP address x.x.x.x
    - host name www is an A NAME record and points to IP address x.x.x.x
    - sub-domain host name new.example.com is an A NAME record and points to IP address x.x.x.x

    My hosting provider has the following settings:

    - host record @ is an A NAME record and points to IP address x.x.x.x, folder home/public_html/old
    - host record www is a C NAME record and points to example.com
    - sub-domain host record new.example.com points to home/public_html/new

    I want to:

    - point the domain (example.com AND www.example.com) to the content hosted under folder home/public_html/new, which is currently the content directory for new.example.com
    - retire the content hosted under folder home/public_html/old
    - retire the sub-domain host record new.example.com

    I believe the easiest method of doing this is removing the sub-domain host record new.example.com, and changing the following line in the .htaccess file in home/public_html from

        # Change 'subdirectory' to be the directory you will use for your main domain.
        RewriteCond %{REQUEST_URI} !^/old/

    to

        # Change 'subdirectory' to be the directory you will use for your main domain.
        RewriteCond %{REQUEST_URI} !^/new/

    But I don't understand how this will impact my SERP - ideally, I'd like it to remain the same. Research on this topic resulted in the following Google page, which was no help, and this related StackExchange question, which suggests that this should not affect my SERP (at least, not permanently). But I wanted to make certain with a more specific example, and hopefully contribute to the community at the same time. I'd appreciate any feedback on this. Is there a better/recommended method to migrate sites this way? Is there an SEO impact?
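
    On the SEO side, the usual way to keep existing rankings through a move like this is to make sure every URL that used to resolve answers with a permanent (301) redirect to its new home rather than simply disappearing. A sketch of what that could look like in the .htaccess under the new document root, assuming new.example.com is kept resolving for a transition period:

        RewriteEngine On
        # anything still arriving on the retired sub-domain goes to the main domain, permanently
        RewriteCond %{HTTP_HOST} ^new\.example\.com$ [NC]
        RewriteRule ^(.*)$ http://example.com/$1 [R=301,L]

    Search engines treat a 301 as a permanent move, so ranking signals generally follow the redirect rather than being dropped along with the old URLs.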

    Read the article

  • How do I convince my team that a requirements specification is unnecessary if we adopt user-stories?

    - by Nupul
    We are planning to adopt user-stories to capture stakeholder 'intent' in a lightweight fashion rather than a heavy SRS (software requirements specifications). However, it seems that though they understand the value of stories, there is still a desire to 'convert' the stories into an SRS-like language with all the attributes, priorities, input, outputs, source, destination etc. User-stories 'eliminate' the need for a formal SRS like artifact to begin with so what's the point in having an SRS? How should I convince my team (who are all very qualified CS folks by the way - both by education and practice) that the SRS would be 'eliminated' if we adopted user-stories for capturing the functional requirements of the system? (NFRs etc can be captured too, but that's not the intent of the question). So here's my 'work-flow' argument: Capture initial requirements as user-stories and later elaborate them to use-cases (which are required to be documented at a low level i.e. describing interactions with the UI prototypes/mockups and are a deliverable post deployment). Thus going from user-stories to use-cases rather than user-stories to SRS to use-cases. How are you all currently capturing user-stories at your workplace (if at all) and how do you suggest I 'make a case' for absence of SRS in presence of user-stories?

    Read the article

  • How do I Series: Connecting an Expression Blend Project to Team Foundation Server

    - by Enrique Lima
    I have heard of people wanting and needing to add projects created in Expression Blend to Team Foundation Server. Here is the recipe: 1) Create your project in Expression Blend … click OK 2) Select the option to open your recently created project in Visual Studio. Once that option is selected, your solution will open up in Visual Studio, close Expression Blend at this point. Now, I want to add this project to Source Control … Next, I connect to my TFS environment, and pick the location to save my project Once the project is added, I will get a status window of pending changes for my project, all that we are left to do is to check in those changes. Since we have checked in our project, we can now close Visual Studio, and we will proceed to open Expression Blend again. And select our project we will! We notice some differences from before, just by opening it What differences you say?!? Notice the lock to the right of the item name … And we also get this when we right click … And there we have it, it is a combination of tools to achieve this, but it is well worth it.

    Read the article

  • How does one extract the Canon ip4600 drivers in inf form?

    - by Trix
    I have an x64 machine which needs "additional drivers" for x86 clients. The problem I have is that I cannot get the proper .inf files so I can use them. Canon only provides an .exe file, which when extracted has other .exe files and some DLL files. I believe Canon changed their drivers, because I did have them extracted at one point, but sadly did not save them :(. The drivers are available here. Hopefully I am clear in what I am trying to do: Canon provides the .exe file, and the .exe file when extracted does not produce any .inf driver files. Thanks!
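
    One approach that often works with self-extracting driver installers (a guess, since Canon's packaging varies): open the .exe as an archive with 7-Zip, or run the installer and copy the unpacked files out of its temporary directory before closing the first dialog. With the 7-Zip command line tool and a placeholder file name:

        7z x ip4600_win_driver.exe -oip4600_extracted
        dir ip4600_extracted /s /b | findstr /i ".inf"

    If the result is still nested .exe files with no .inf, the %TEMP% trick (start the installer, then look in the newest folder under %TEMP% while the first setup screen is still open) is the other common route to the raw .inf/.cat/.dll set needed for the "Additional Drivers" dialog.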

    Read the article

  • How to manage a developer who has poor communication skills

    - by djcredo
    I manage a small team of developers on an application which is in the mid-point of its lifecycle, within a big firm. This unfortunately means there is commonly a 30/70 split of programming tasks to "other technical work". This work includes:

    - Working with DBA / Unix / Network / Loadbalancer teams on various tasks
    - Placing & managing orders for hardware or infrastructure in different regions
    - Running tests that have not yet been migrated to CI
    - Analysis
    - Support / Investigation

    It's fair to say that the developers would all prefer to be coding rather than doing these more mundane tasks, so I try to hand out the fun programming jobs evenly amongst the team. Most of the team was hired because, though they may not have the elite programming skills to write their own compiler / game engine / high-frequency trading system etc., they are good communicators who "can get stuff done", work with other teams, and somewhat navigate the complex bureaucracy here. They are good developers, but they are also good all-round technical staff.

    However, one member of the team probably has above-average coding skills, but below-average communication skills. Traditionally, the previous development manager tended to give him the programming tasks and not the more mundane tasks listed above. However, I don't feel that this is fair to the rest of the team, who have shown an aptitude for developing the well-rounded skillset that is commonly required in a big-business IT department.

    What should I do in this situation? If I continue to give him more programming work, I know that it will be done faster (and conversely, I would expect him to complete the other work more slowly). But it goes against my principles, and promotes the idea that you can carve out a "comfortable niche" for yourself simply by being bad at the tasks you don't like.

    Read the article

  • Troubles setting up my new T1

    - by timmaah
    I'm more of a web developer kind of guy with limited knowledge of networks, so if anyone can point me in the right direction, I would be grateful. I am replacing my satellite connection with a T1 I got for a good deal through the phone company. I also managed to get my hands on a Netvanta 3200 router. My problem is I can't quite figure out how to set up the router, and I can't find any kind of guide that would explain what I need to set where. I'm not sure what to do next on my troubleshooting journey.

    Read the article

  • DNS records on website.. What are they for?

    - by Blake Nic
    Recently we had to get some DDoS protection for our website because of the large attacks we were seeing after getting a bit of popularity. We handed over our domain and hosting information to our DDoS protection provider. It worked perfectly, but I have a question. In our DNS records we have the Host, the Answer and the Type. The Host has our domain name there. The answer is this: SOMETEXTXXXX.dv.googlehosted.com. And when I copy and paste it into my browser it gives me a 404 error. But our website still loads and functions as it should. I don't understand why it would need this. I asked them about it and they said it is a method for DDoS protection, and that the other IPs are the reverse proxy (the other IPs give a 404 error too). Can anyone expand on this more please? How does all this tie together and let the browser know where to send the person, with all these reverse proxies and stuff I don't understand? Thank you. Here is an image for reference: http://i.stack.imgur.com/qo5QO.png

    Read the article

  • Can't change Hyper Terminal Hardware Settings

    - by Tim
    Anyone here any good with HyperTerminal? I'm having a nightmare trying to use it at work to update our telephone extensions. The company who supply our PABX box have told me that XP does strange things to HyperTerminal and that I should use Win2k, which I did, with the same result. I have narrowed the problem down to the hardware settings for the connection in HyperTerminal: no matter what I set (and I need 7E1), it defaults back to 8N1. Has anyone seen this behaviour before and know of a simple way around it? (Apart from buying a more expensive commercial version of HyperTerminal, as suggested by our support people.)

    Edit: I should point out I have to connect via a phone line, so a direct serial connection is not an option. Cheers, Tim
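
    If HyperTerminal simply refuses to keep the setting, one low-cost workaround is a terminal program that takes the serial parameters on the command line, for example PuTTY (free). A sketch, with the COM port and speed as placeholders for whatever the PABX needs; since the link is over a phone line, the session talks to the local modem's COM port and the number is then dialled with the usual ATDT command from inside the window:

        putty -serial COM1 -sercfg 9600,7,e,1,N

    The -sercfg string is speed, data bits, parity and stop bits (plus N for no flow control), so 7E1 is pinned from the start rather than negotiated through a dialog that keeps reverting.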

    Read the article
