Search Results

Search found 29638 results on 1186 pages for 'phone number'.


  • Oracle Secure Global Desktop (SGD) 5.1

    - by wcoekaer
    Last week, we released the latest update of Oracle Secure Global Desktop. Release 5.1 introduces a number of bug fixes and smaller changes, but the most interesting one is definitely the expanded support for HTML5-based client access. In SGD 5.0 we added support for Apple iPads using Safari to connect to SGD and display your session right inside the browser. In the traditional model, you connect with a web browser to the webtop, and applications are displayed locally by a native client (tta) that gets installed the first time you connect. So the traditional model (which works very well...) requires a web browser, Java, and the tta client. With the addition of HTML5 support there is no longer any need to install a local client; in fact, there is no longer any need to have Java installed at all. We currently support Chrome as the browser for HTML5 clients, which lets us enable HTML5 access on Android devices as well as on desktops running Chrome (Windows, Mac OS X, Linux). Connections work transparently across proxy servers as well. So now you can run any SGD published application or desktop right inside a browser window. This is very convenient and cool.

    Read the article

  • Amazon EC2 Sign In

    - by Barry
    When I change the home directory of my Amazon EC2 instance from /home/ubuntu to /home/ubuntu/folder in the /etc/passwd file, I am no longer able to access the instance using my existing keypair. Once I switch it back to the original directory I have no problems and can log into my instance as normal. I have checked the permissions on the new folder and they are drwxr-xr-x, the same as on the /home/ubuntu folder. I have a number of instances running at the moment, and because of this change I have no way of logging back into them to rectify the situation. Does anyone have an idea what is going on? Thanks in advance
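
    The likely mechanism, as an educated guess rather than something from the thread: sshd resolves ~/.ssh/authorized_keys against the home directory listed in /etc/passwd, so after the change it looks in /home/ubuntu/folder/.ssh/, which doesn't exist. A sketch of the fix, run before making the change or from a rescue environment (paths assume the stock Ubuntu AMI layout):

        # Move the key material along with the home directory
        mkdir -p /home/ubuntu/folder/.ssh
        cp /home/ubuntu/.ssh/authorized_keys /home/ubuntu/folder/.ssh/
        chown -R ubuntu:ubuntu /home/ubuntu/folder/.ssh
        chmod 700 /home/ubuntu/folder/.ssh
        chmod 600 /home/ubuntu/folder/.ssh/authorized_keys

    For instances that are already locked out, the standard recovery is to stop the instance, attach its root volume to another instance, and either revert /etc/passwd there or create the missing .ssh directory.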

    Read the article

  • Using the Promoted Builds plugin to tag a Subversion repository in Jenkins

    - by mark
    We have a job that builds based on data from 4 different SVN repositories. I want to allow QA to promote a build, so that the revisions participating in that build are tagged with the build number and an optional label. I have encountered the following problem: the promoted build may not be the most recent build, so how do I know the SVN revision of each of the four repositories used during that build? I know that each build has this information in the revision.txt and build.xml files associated with it, but how does it become available in the context of a promotion? Thanks. P.S. Asked here before, but did not get a satisfying answer.
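
    One approach, offered as a sketch rather than a confirmed answer: the Promoted Builds plugin exports PROMOTED_JOB_NAME and PROMOTED_NUMBER to the steps it runs, so a shell step inside the promotion process can read the revision.txt of the promoted build rather than of the latest one:

        # Runs as a promotion action, not as a build step
        BUILD_DIR="$JENKINS_HOME/jobs/$PROMOTED_JOB_NAME/builds/$PROMOTED_NUMBER"
        # revision.txt pairs each SVN URL with the revision used in that build;
        # check the exact line format on your Jenkins version before parsing
        cat "$BUILD_DIR/revision.txt"

        # then tag each repository at its recorded revision, for example:
        # svn copy "$URL@$REV" "$TAG_BASE/build-$PROMOTED_NUMBER" -m "QA promotion"

    $TAG_BASE is a placeholder for each repository's tags URL, and JENKINS_HOME must be set or substituted with the master's actual home path.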

    Read the article

  • Database implementation question?

    - by gundam
    Consider a disk with a sector size of 512 bytes, 2000 tracks per surface, 50 sectors per track, 5 double-sided platters, and an average seek time of 10 msec. Assume a block size of 1024 bytes is selected, and that a file containing 100,000 records of 100 bytes each is to be stored on the disk, where no record is allowed to span two blocks. How many blocks are needed to store the entire file? If the file is arranged sequentially on disk, how many surfaces are required? Now, I have calculated that 10,000 blocks are needed to store 100,000 records, but I am not sure how to work out the number of surfaces required. I have only calculated that the capacity of a track is 25 KB and the capacity of a surface is 50,000 KB, but I don't know how to calculate the number of surfaces... Could anyone help me get to the answer? Thanks a lot!!
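
    One way to finish the calculation from the figures already worked out (a sketch of the standard textbook reasoning):

        records per block  = floor(1024 / 100)  = 10      (no spanning allowed)
        blocks needed      = 100,000 / 10       = 10,000 blocks
        blocks per track   = (50 x 512) / 1024  = 25
        tracks needed      = 10,000 / 25        = 400 tracks

    A surface holds 2,000 tracks, and 400 <= 2,000, so the file stored sequentially fits on a single surface. The answer is 1 surface, which matches the capacity check: a 10,000 KB file against 50,000 KB per surface is about 20% of one surface.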

    Read the article

  • IIS7 is gzipping files but not serving the gzipped version.

    - by ptrin
    By following a number of helpful blog posts I have configured IIS to gzip my static files. I have even enabled Failed Request Tracing and filtered to the 200 status code, and I can see the successful compression events taking place, as well as the finished headers, which look like this:

        Content-Type: text/css
        Content-Encoding: gzip
        Last-Modified: Mon, 04 Oct 2010 17:35:08 GMT
        Accept-Ranges: bytes
        ETag: "02ef37cea63cb1:0"
        Vary: Accept-Encoding
        Server: Microsoft-IIS/7.5
        X-Powered-By: ASP.NET

    However, when I test in Fiddler and Firefox, the Content-Encoding header is missing and the file is not gzipped. This is a similar issue to this question, which was never resolved. IIS is generating the gzipped files, which I can see in C:\inetpub\temp\IIS Temporary Compressed Files. Does anyone know how I can troubleshoot this?
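
    One frequent cause worth ruling out (an assumption on my part; the thread doesn't confirm it): IIS only serves static content compressed once it counts as "frequently hit", two requests within ten seconds by default, so one-off test requests keep receiving the uncompressed copy even though the compressed file exists on disk. Lowering the threshold makes every request eligible:

        %windir%\system32\inetsrv\appcmd.exe set config -section:system.webServer/serverRuntime /frequentHitThreshold:1 /commit:apphost

    After an iisreset, a single request from Fiddler with an Accept-Encoding: gzip header should come back with Content-Encoding: gzip if this was the cause.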

    Read the article

  • Ubuntu Linux -- create custom burnable/bootable DVD image?

    - by ashgromnies
    I recently developed some kiosk software that runs on Ubuntu Linux, and my client needs me to set up ten more computers with the complete software package (and that number will only grow in the future). So I'm looking for a way to make this less of a pain in the neck and prevent me from shooting myself in the foot: I had to disable some things in the operating-system installations, like screensavers and automatic updates, that would pop up and disrupt the kiosk operation, and I don't feel comfortable doing that by hand across 10 computers; it seems stupid. Does anybody have recommendations for software that would let me burn an installable DVD with a complete image of the hard drive from one of the devices? I've looked at Clonezilla, G4L, and PartImage and I'm still not quite sure whether any of them offers what I need. I know PartImage for sure won't work, because it doesn't support ext4.

    Read the article

  • PowerShell indentation

    - by Steve B
    I'm writing a large script that deploys an application. The script is built on several nested function calls. Is there any way to indent the output based on the call depth? For example, I have:

        function myFn() {
            Write-Host "Start of myFn"
            myFnNested
            Write-Host "End of myFn"
        }
        function myFnNested() {
            Write-Host "Start of myFnNested"
            Write-Host "End of myFnNested"
        }
        Write-Host "Start of myscript"
        myFn
        Write-Host "End of myscript"

    The output of the script will be:

        Start of myscript
        Start of myFn
        Start of myFnNested
        End of myFnNested
        End of myFn
        End of myscript

    What I want to achieve is this output:

        Start of myscript
          Start of myFn
            Start of myFnNested
            End of myFnNested
          End of myFn
        End of myscript

    As I don't want to hard-code the number of spaces (since I don't know the depth level in a complex script), how can I simply reach my goal? Maybe something like this?

        function myFn() {
            Indent
            Write-Host "Start of myFn"
            myFnNested
            Write-Host "End of myFn"
            UnIndent
        }
        function myFnNested() {
            Indent
            Write-Host "Start of myFnNested"
            Write-Host "End of myFnNested"
            UnIndent
        }
        Write-Host "Start of myscript"
        myFn
        Write-Host "End of myscript"
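
    A minimal sketch of one way to implement Indent/UnIndent (my own suggestion, not something from the thread): keep a script-scoped counter and route output through a small wrapper:

        $script:indentLevel = 0
        function Indent   { $script:indentLevel++ }
        function UnIndent { if ($script:indentLevel -gt 0) { $script:indentLevel-- } }
        function Write-Indented([string]$Message) {
            # Two spaces per nesting level, prepended to each message
            Write-Host ((' ' * (2 * $script:indentLevel)) + $Message)
        }

    Replace the Write-Host calls with Write-Indented and the nesting follows the call depth automatically; wrapping each function body in try/finally keeps the counter balanced if something throws partway through.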

    Read the article

  • How to turn LEDs on and off from the terminal?

    - by GarouDan
    I would like to turn some of my keyboard LEDs on and off by running a command on Linux. I use Ubuntu 12.04 LTS. I tried:

        xset led named "Scroll Lock"
        xset led named "Num Lock"
        xset led 2    (this is the number of Scroll Lock, as `xset q` says)
        xset led 1

    but nothing works. I also tried:

        setleds +num
        setleds +scroll

    but I got an error message saying "Error reading the current settings of flags. Maybe you're not on the console?" (I was in a terminal). So, how can I perform this?
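
    The error message is the clue here, as far as I can tell (my reading, not confirmed in the thread): setleds drives the kernel virtual-console LEDs, so it only works on a real VT (Ctrl+Alt+F1), not in a terminal emulator under X. Two routes, depending on where you are:

        # From X, target a virtual console explicitly:
        sudo setleds -L +num < /dev/tty1

        # Under X itself, toggle an LED by number via xset:
        xset led 3
        xset -led 3

    The -L flag changes only the LED without touching the lock flag behind it, and which number maps to which LED varies with the keyboard driver; 3 is often the only one (usually Scroll Lock) that X will let you toggle.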

    Read the article

  • DNS problems: correct nameserver, nameserver working, but not resolving

    - by user1719624
    My problem is as follows; any suggestions are welcome. [domain].org is not resolving. whois and the registry information show that the correct nameserver is set. The primary nameserver is also the server on which [domain].org is hosted. The primary nameserver is also used for a number of other domains and is working fine for those. Logging into the server, I can ping [domain].org and it resolves correctly. Setting the nameserver as my own DNS server on my laptop, the URL also resolves correctly. If the domain has the correct nameserver set, and the nameserver can resolve the URL to the correct IP address, and if I use the nameserver as my DNS then it resolves correctly, AND the nameserver is used for other domains which are resolving correctly, then why isn't it working? NB: this is a new domain registration that has been set up for around 10 days now, so it's not simply slow propagation. Any ideas? thanks
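
    A few checks that usually narrow this down (my suggestion, not from the thread; ns1.example-host.com stands in for the real primary nameserver):

        # Follow the delegation from the root down; shows where resolution breaks
        dig +trace domain.org

        # Ask the primary nameserver directly for the record it hosts
        dig @ns1.example-host.com domain.org A

        # Compare the NS records inside the zone with the delegated ones
        dig @ns1.example-host.com domain.org NS

    If the server answers directly but +trace dies at the last hop, the usual suspects are a missing or wrong glue record at the registrar, or zone NS records that don't match the delegation; both leave every existing domain on the server working while only the new one fails.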

    Read the article

  • Exchange 2007 automatically adding IP to block list

    - by Tim Anderson
    This puzzled me. We have all mail directed to an ISP's spam filter, then delivered to SBS 2008 Exchange. One of the ISP's IP addresses suddenly appeared in the Exchange 2007 block list, set to expire in 24 hours I think, so emails started bouncing. A quick look through the typically ponderous docs turns up nothing that says Exchange will auto-block an IP address, but nobody is admitting to adding it manually, and I think Exchange must have done it itself. Does anyone know about this behaviour or where it is configured? Obviously one could disable block lists completely, but I'd like to know exactly why this happened.
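
    My suspicion, offered as an assumption rather than something confirmed in the thread: the Sender Reputation anti-spam agent inserts self-expiring entries into the IP Block List, with a 24-hour blocking period by default, which matches the expiry described. The relevant Exchange Management Shell cmdlets (203.0.113.5 is a placeholder address):

        # List block-list entries; machine-added ones carry an expiry time
        Get-IPBlockListEntry

        # Remove the ISP's address from the list
        Get-IPBlockListEntry | Where-Object { $_.IPRange -eq "203.0.113.5" } |
            Remove-IPBlockListEntry

        # Settings that govern automatic blocking
        Get-SenderReputationConfig |
            Format-List SenderBlockingEnabled, SrlBlockThreshold, SenderBlockingPeriod

    Raising the SRL threshold or disabling sender blocking is gentler than turning off block lists entirely.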

    Read the article

  • HP Proliant DL160 G6 - Hardware RAID card to get? [closed]

    - by zhuanyi
    I have bought a DL160 G6 server (product number 490427R-001) and it does not come with a hardware RAID card. I am trying to set up a VMware ESXi server, and as such I need a hardware RAID card. I am just wondering whether there is any card that would fit into the chassis. Would a P200i fit? How about a P400? Also, would any non-HP RAID card do the magic too? I have 4 SATA 160 GB hard drives already plugged in. Thanks a lot!

    Read the article

  • HP DL380 G5 Predictive failure of a new drive

    - by CharlieJ
    Consolidated Error Report:

        Controller: Smart Array P400 in slot 3
        Device: Physical Drive 1I:1:1
        Message: Predictive failure.

    We have an HP DL380 G5 server with two 72GB 15k SAS drives configured in RAID 1. A couple of weeks ago, the server reported a drive failure on drive 1. We replaced the drive with a brand-new HDD with the same spare part number. A few days ago, the server started reporting a predictive drive failure on the new drive, in the same bay. Is it likely the new drive is bad, or is it more likely we have a bay failure problem? This is a production server, so any advice would be appreciated. I have another spare drive, so I can hot-swap it if this is a fluke and the new drive is just bad. THANKS! CharlieJ

    Read the article

  • scsi and ata entries for same hard drive under /dev/disk/by-id

    - by John Dibling
    I am trying to set up a ZFS pool using 4 bare drives which I have attached to my Ubuntu system via a SATA hot swap backplane. These are Hitachi SATA drives. When I list the contents of /dev/disk/by-id, I see two entries for each drive: root@scorpius:/dev/disk/by-id# ls | grep Hitachi ata-Hitachi_HDS5C3030ALA630_MJ1323YNG0ZJ7C ata-Hitachi_HDS5C3030ALA630_MJ1323YNG1064C ata-Hitachi_HDS5C3030ALA630_MJ1323YNG190AC ata-Hitachi_HDS5C3030ALA630_MJ1323YNG1DGPC scsi-SATA_Hitachi_HDS5C30_MJ1323YNG0ZJ7C scsi-SATA_Hitachi_HDS5C30_MJ1323YNG1064C scsi-SATA_Hitachi_HDS5C30_MJ1323YNG190AC scsi-SATA_Hitachi_HDS5C30_MJ1323YNG1DGPC I know these are the same drives because I wrote down the serial numbers, and all the other drives in this system are either Seagate or WD. The serial number for the first one, for example, is YNG0ZJ7C. Why are there two entries here for each drive? More to the point, when I create my ZFS pool which one should I use; the scsi- one or the ata- one?
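
    For what it's worth (my understanding, not confirmed in the thread): both names are symlinks to the same block device; the scsi-SATA_* aliases come from the SCSI layer SATA disks are presented through, and the ata-* ones from the ATA subsystem, so either will work. A sketch of creating the pool with the ata- names, which preserve the full model string; "tank" and the raidz layout are placeholders, since the post doesn't say which layout is planned:

        # by-id names survive controller/port reordering, unlike /dev/sdX
        zpool create tank raidz \
            /dev/disk/by-id/ata-Hitachi_HDS5C3030ALA630_MJ1323YNG0ZJ7C \
            /dev/disk/by-id/ata-Hitachi_HDS5C3030ALA630_MJ1323YNG1064C \
            /dev/disk/by-id/ata-Hitachi_HDS5C3030ALA630_MJ1323YNG190AC \
            /dev/disk/by-id/ata-Hitachi_HDS5C3030ALA630_MJ1323YNG1DGPC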

    Read the article

  • Should I use a separate class per test?

    - by user460667
    Taking the following simple method, how would you suggest I write a unit test for it? (I am using MSTest; however, the concepts are similar in other tools.)

        public void MyMethod(MyObject myObj, bool validInput)
        {
            if (!validInput)
            {
                // Do nothing
            }
            else
            {
                // Update the object
                myObj.CurrentDateTime = DateTime.Now;
                myObj.Name = "Hello World";
            }
        }

    If I try to follow the rule of one assert per test, my logic would be that I should have a class-initialise method which executes the method under test, and then individual tests which check each property on myObj:

        [TestClass]
        public class MyTest
        {
            MyObject myObj;

            [TestInitialize]
            public void MyTestInitialize()
            {
                this.myObj = new MyObject();
                MyMethod(myObj, true);
            }

            [TestMethod]
            public void IsValidName()
            {
                Assert.AreEqual("Hello World", this.myObj.Name);
            }

            [TestMethod]
            public void IsDateNotNull()
            {
                Assert.IsNotNull(this.myObj.CurrentDateTime);
            }
        }

    Where I am confused is around TestInitialize. If I execute the method under TestInitialize, I would need separate classes per variation of parameter inputs. Is this correct? This would leave me with a huge number of files in my project (unless I have multiple classes per file). Thanks
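
    One way around the class-per-input explosion (my suggestion, not from the thread): drop TestInitialize and use a private arrange helper that each test calls with the input it cares about. Tests stay one-assert but share a single class; the second test assumes Name defaults to null on a fresh MyObject:

        [TestClass]
        public class MyMethodTests
        {
            // Arrange-and-act helper; each test picks its own input
            private MyObject Run(bool validInput)
            {
                var obj = new MyObject();
                MyMethod(obj, validInput);
                return obj;
            }

            [TestMethod]
            public void ValidInput_SetsName()
            {
                Assert.AreEqual("Hello World", Run(true).Name);
            }

            [TestMethod]
            public void InvalidInput_LeavesNameUnset()
            {
                Assert.IsNull(Run(false).Name);
            }
        }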

    Read the article

  • Repository query conditions, dependencies and DRY

    - by vFragosop
    To keep it simple, let's suppose an application which has Accounts and Users. Each account may have any number of users. There are also three consumers of UserRepository: an admin interface which may list all users; a public front-end which may list all users; and an account-authenticated API which should only list its own users. Assume UserRepository is something like this:

        class UsersRepository extends DatabaseAbstraction
        {
            private function query()
            {
                return $this->database()->select('users.*');
            }

            public function getAll()
            {
                return $this->query()->exec();
            }

            // IMPORTANT:
            // Tons of other methods for searching, filtering,
            // joining of other tables, ordering and such...
        }

    Keeping in mind the comment above, and the necessity to abstract user querying conditions, how should I handle querying of users filtered by account_id? I can picture three possible roads:

    1. Should I create an AccountUsersRepository?

        class AccountUsersRepository extends UserRepository
        {
            public function __construct(Account $account)
            {
                $this->account = $account;
            }

            private function query()
            {
                return parent::query()
                    ->where('account_id', '=', $this->account->id);
            }
        }

    This has the advantage of reducing the duplication of UsersRepository methods, but doesn't quite fit into anything I've read about DDD so far (I'm a rookie, by the way).

    2. Should I put it as a method on AccountsRepository?

        class AccountsRepository extends DatabaseAbstraction
        {
            public function getAccountUsers(Account $account)
            {
                return $this->database()
                    ->select('users.*')
                    ->where('account_id', '=', $account->id)
                    ->exec();
            }
        }

    This requires the duplication of all UserRepository methods and may need another UserQuery layer that implements the querying logic in a chainable way.

    3. Should I query UserRepository from within my Account entity?

        class Account extends Entity
        {
            public function getUsers()
            {
                return UserRepository::findByAccountId($this->id);
            }
        }

    This feels more like an aggregate root to me, but introduces a dependency of UserRepository on the Account entity, which may violate a few principles.

    4. Or am I missing the point completely? Maybe there's an even better solution?

    Footnotes: besides permissions being a Service concern, in my understanding they shouldn't implement SQL queries but leave that to repositories, since those may not even be SQL-driven.

    Read the article

  • Why does the player fall down when in between platforms? (Tile-based platformer)

    - by inzombiak
    I've been working on a 2D platformer and have gotten the collision working, except for one tiny problem. My game is a tile-based platformer, and whenever the player is in between two tiles, he falls down. Here is my code, which fires on an ENTER_FRAME event; it only handles collision from the bottom for now:

        var i:int;
        var j:int;
        var platform:Platform;
        var playerX:int = player.x / 20;
        var playerY:int = player.y / 20;
        var xLoopStart:int = (player.x - player.width) / 20;
        var yLoopStart:int = (player.y - player.height) / 20;
        var xLoopEnd:int = (player.x + player.width) / 20;
        var yLoopEnd:int = (player.y + player.height) / 20;
        var vy:Number = player.vy / 20;
        var hitDirection:String;

        for (i = yLoopStart; i <= yLoopEnd; i++)
        {
            for (j = xLoopStart; j <= xLoopStart; j++)
            {
                if (platforms[i*36 + j] != null && platforms[i*36 + j] != 0)
                {
                    platform = platforms[i*36 + j];
                    if (player.hitTestObject(platform) && i >= playerY)
                    {
                        hitDirection = "bottom";
                    }
                }
            }
        }

    This isn't the final version; I'm going to replace hitTest with something more reliable. But this is an interesting problem and I'd like to know what's happening. Is my code just slow? Would firing the code off a TIMER event fix it? Any information would be great.
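
    A likely culprit, offered as a guess from reading the code rather than a confirmed fix: the inner loop's condition compares j against xLoopStart instead of xLoopEnd, so only the leftmost tile column is ever checked. Whenever the player straddles two tiles, the right-hand tile is never tested and the player falls through:

        for (j = xLoopStart; j <= xLoopEnd; j++)   // was: j <= xLoopStart
        {
            // ...same body as above...
        }

    If that is the bug, switching to a TIMER event would change nothing, since the second column is never visited regardless of how often the handler runs.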

    Read the article

  • Approach to Authenticate Clients to TCP Server

    - by dab
    I'm writing a server/client application where clients will connect to the server. What I want to do is make sure that the client connecting to the server is actually using my protocol, so I can "trust" the data being sent from the client to the server. What I thought about doing is creating a sort of hash on the client's machine that follows a particular algorithm. In a previous version I took their IP address, the client version, and a few other attributes of the client and sent them as a calculated hash to the server, which then took their IP and the version of the protocol the client claimed to be using and calculated the same number to see if they matched. This works until you get clients connecting from behind a router, where their internal IP differs from their external IP. My fix for that was to pass the client's internal IP, used to calculate the hash, along with the authentication protocol. My fear is that this approach is not secure enough, since I'm passing the very data used to create the "auth hash". Here's an example of what I'm talking about:

        Client IP: 192.168.1.10, Version: 2.4.5.2
        hash = 2*4*5*1 * (1+9+2) * (1+6+8) * (1) * (1+0)

        Client connects to server
        Client sends: auth hash, ip, version
        Server recalculates from that info, and accepts or denies the hash.

    Before I go and come up with another algorithm to prove a client can provide data to a server (or reuse this existing one), I was wondering whether there are any existing, proven, and secure systems for generating a hash that both sides can compute from general knowledge. The server won't know about the client until the very first connection is established. The protocol's intent is to manage a network of clients who will contribute data to the server periodically. New clients are added simply by connecting to the server and "registering". So a client connects to the server for the first time and registers its info (MAC address or some other kind of unique computer identifier); when it connects again, the server recognizes it as a returning client and associates it with its data in the database.
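
    A standard alternative to hashing values the client also transmits (my suggestion, not from the thread; anything computable from sent data can be forged by anyone who reads the protocol) is a shared-secret challenge-response: at registration the server issues each client a random secret, and on each connection the server sends a fresh nonce which the client answers with an HMAC, so the secret itself never crosses the wire and old responses can't be replayed. A minimal sketch:

        import hmac, hashlib, os

        def make_challenge() -> bytes:
            # Server: a fresh random nonce per connection attempt
            return os.urandom(16)

        def client_response(secret: bytes, nonce: bytes) -> bytes:
            # Client: prove knowledge of the secret without sending it
            return hmac.new(secret, nonce, hashlib.sha256).digest()

        def server_verify(secret: bytes, nonce: bytes, response: bytes) -> bool:
            expected = hmac.new(secret, nonce, hashlib.sha256).digest()
            # Constant-time comparison avoids timing leaks
            return hmac.compare_digest(expected, response)

    For real deployments, TLS with client certificates does the same job with already-vetted code, and also encrypts the periodic data uploads.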

    Read the article

  • CPU/RAM usage log over a period of time to file on CentOS

    - by joel_gil
    Hi everyone. I'm looking for an app or a few lines of code that could let me observe a process, save the info in a number of variables, and then write the gathered info to a file. I've been trying variations of top with no luck. I am running several CentOS virtual servers; each VM has 2 GB RAM and 2 processors. Maybe a script that runs over a specified amount of time, writing lines with the info to a text file, so at the end I have a sort of table with the data. The thing is, I'm going to stress-test the server and I would like to have the data to compute some statistics. Any comments and suggestions are most welcome.
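
    A minimal sketch of such a script (the process name and interval are placeholders to adjust):

        #!/bin/sh
        # Append a timestamped CPU/RAM sample for one process every 5 seconds
        PID=$(pidof myserver)          # 'myserver' is a placeholder
        OUT=/tmp/usage.log
        while true; do
            echo "$(date '+%F %T') $(ps -p "$PID" -o %cpu=,%mem=,rss=)" >> "$OUT"
            sleep 5
        done

    Each line comes out as timestamp, %CPU, %MEM, and resident size in KB, which loads straight into a spreadsheet for the statistics. For whole-system numbers, sar from the sysstat package records the same data with less scripting.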

    Read the article

  • Dual-boot problem with Ubuntu 12.04 and Vista

    - by vendella dahlahdoo
    Greetings from New Zealand. I have installed Microsoft Windows Vista and then Ubuntu 12.04 on my refurbished Compaq nx8220 laptop, and I continually get the following infamous, head-hurting prompt:

        error: no such partition.
        grub rescue>

    I have tried most of the commonly recommended solutions. Booting a live CD and installing Boot-Repair through the terminal didn't work: it repaired all the Linux stuff when restoring GRUB, but then I can't boot into Windows Vista. When I use Boot-Repair to fix the MBR instead, I can't boot into Ubuntu. I tried installing BCD 2.1 in Vista and tried all its options one after another; still no Ubuntu when selected from the menu on restart. I have tried the boot-repair option on the Ubuntu Server CD-ROM, and tried installing earlier versions of Ubuntu (11.04, 11.10, and Ubuntu Server 11.10 and 12.04), with the same result every time. I have deleted the Ubuntu partitions through Vista a number of times and reinstalled Ubuntu, and I have been trying all the Boot-Repair options in different combinations for the past week and a half, with at least 10 installs and reinstalls. I really love Ubuntu and believe I have exhausted most of the recommended solutions, and I have spent too much time on this. It's driving me nuts!! Please can someone help; I have finally given up (sigh). The following are some outputs from Boot-Repair from my last attempts:

        http://paste.ubuntu.com/1019227
        http://paste.ubuntu.com/1019264

    I was only allowed to post two links, being a newbie. The only thing left for me to do is the flying Samoan dropkick laptop trick. Thanks in advance. Francis.
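
    One generic way back into Ubuntu from that prompt, in case it is useful while the real fix is sorted out (the partition numbers are placeholders; `ls` at the prompt shows the actual ones):

        grub rescue> ls
        (hd0) (hd0,msdos5) (hd0,msdos1)
        grub rescue> set prefix=(hd0,msdos5)/boot/grub
        grub rescue> set root=(hd0,msdos5)
        grub rescue> insmod normal
        grub rescue> normal

    Once booted, sudo grub-install /dev/sda followed by sudo update-grub rewrites the MBR with entries for both systems. Having the Vista bootloader chain to Ubuntu while Boot-Repair also rewrites the MBR is the kind of combination that commonly produces exactly this back-and-forth.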

    Read the article

  • Case studies for successful service (project) based software development businesses without constant overtime from its employees [closed]

    - by Ryan Taylor
    I work for an IT company that is primarily services (project) based rather than product based. All software engineers are salaried. The company has set a new expectation that everyone should work 48 hours per week instead of 40. Note, this isn't occasional overtime due to crunches; this is the new 40. The reasoning is that this enables the company to provide benefits to its employees, such as monetary incentives and training, because the company is more profitable:

        more hours worked = more billable hours = larger profit

    I understand the need for profitability and the occasional crunch, and I have put in the extra hours when it was needed and beneficial to the project. However, I am also very sensitive to work-life balance and have raised my concerns about the new expectation. My employer is open to other methods of increasing profitability, so I hold out hope that we can turn things around before it becomes a horrible place to work. How does a services-based company become more profitable without increasing the number of hours expected from its salaried employees? Are there any case studies showing the pros and cons of consistent overtime? Are there any case studies of a successful services-based business model (for software development companies) that does not require consistent overtime from its employees?

    Read the article

  • Writing generic code when your target is a C compiler

    - by enobayram
    I need to write some algorithms for a PIC microcontroller. AFAIK, the official tools support either assembler or a subset of C. My goal is to write the algorithms in a generic and reusable way without losing any runtime or memory performance, and, if possible, without increasing development time or compromising readability and maintainability much either. What I mean by generic and reusable is that I don't want to commit to types, array sizes, the number of bits in a bit field, etc. All these requirements, IMHO, point to C++ templates, but there's no compiler for them for my target. C macro metaprogramming is another option, but, again in my opinion, it greatly reduces readability and increases development time. I believe what I'm looking for is a decent C++-to-C translator, but I'd like to hear anything else that satisfies the above requirements: maybe a translator from another high-level language to C that produces very efficient code, maybe something else. Please note that I have nothing against C; I just wish templates were available in it.
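
    For concreteness, here is a small example of the macro route mentioned above (my sketch, not a recommendation): a type- and size-generic ring buffer that expands to plain C any PIC compiler should digest:

        /* Declare a ring-buffer type plus push/pop for a given element
           type T, capacity N, and name prefix. */
        #define DEFINE_RING_BUFFER(T, N, name)                                \
            typedef struct { T buf[N]; unsigned char head, tail; } name##_t;  \
            static void name##_push(name##_t *r, T v) {                      \
                r->buf[r->head] = v;                                         \
                r->head = (unsigned char)((r->head + 1) % (N));              \
            }                                                                \
            static T name##_pop(name##_t *r) {                               \
                T v = r->buf[r->tail];                                       \
                r->tail = (unsigned char)((r->tail + 1) % (N));              \
                return v;                                                    \
            }

        DEFINE_RING_BUFFER(unsigned char, 16, rx)  /* gives rx_t, rx_push, rx_pop */

    It illustrates the trade-off described: the expansion is exactly the C one would have written by hand, but errors and debugging happen in the expanded form, which is where the readability cost comes in.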

    Read the article

  • How to check the OS is running on bare metal and not in virtualized environment created by BIOS?

    - by Arkadi Shishlov
    Is there any software available as a Linux, *BSD, or Windows program or boot image to check (or guess with good probability) whether the environment an operating system is loaded onto is genuine bare metal and not already virtualized? Given recent information from various sources, including the supposed E. Snowden leaks, I'm curious about the security of my PCs, even those that don't have an on-board BMC. How could it be possible, and why? See for example Blue Pill, and a number of papers. With a little assistance from network-card firmware, which is also loadable on popular card models, such a hypervisor could easily spy on me, rendering PGP, Tor, etc. exercises futile.
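
    The usual first-pass checks on Linux only detect a hypervisor that announces itself; a deliberately stealthy one defeats them, which is the point of the Blue Pill argument:

        # CPUID leaf 1 sets a 'hypervisor' flag when virtualized
        grep -q hypervisor /proc/cpuinfo && echo "hypervisor bit set"

        # virt-what prints the detected virtualization type, if any
        sudo virt-what

        # DMI strings often betray VMware, KVM, VirtualBox, or Hyper-V
        sudo dmidecode -s system-product-name

    Against a hiding hypervisor, the published approaches are timing-based: comparing cycle counts of instructions a hypervisor must trap (such as CPUID) against ones it doesn't. Those tests are probabilistic and hardware-specific, which is why no simple boot image settles the question definitively.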

    Read the article

  • Is it necessary to change the default users and groups in VMware esxi 4.0 in order to have a secure

    - by Teevus
    By default, ESXi creates a number of users and groups, including: daemon, nfsnobody, root, nobody, vimuser, dcui. How secure is this default setup? Besides changing the root password, is it advisable to modify the default users and groups? For example, does ESXi use default passwords for these accounts, or anything else that could be exploited by malicious users? My scenario is very basic and I don't require any custom users or groups, as only sysadmins will ever need to administer the virtual infrastructure, and they can do so using the root account. Thanks

    Read the article

  • How to downgrade Razor 3 and fix CSHTML files not working in VS 2010/2012

    - by Anirudha
    Originally posted on: http://geekswithblogs.net/anirugu/archive/2013/11/04/how-to-downgrade-razor-3-and-fix-the-issue-that.aspx
    A few days ago I migrated a project to MVC 4 and suddenly found that the project's .cshtml files no longer worked. This happened because my project was now based on Razor 3 RC, which VS 2012 does not yet support (the VS team will ship support in VS 2012 Update 4). My migration had pulled in Razor 3, which MVC 4 does not actually need: MVC 4 uses the older Razor 2. So how to fix the problem? Since VS 2012 Update 4 is still in development and the older Razor is supported by both VS 2010 and VS 2012, it is better to migrate back to the old Razor version so the project can be used in either. If your project has Razor 3 and syntax highlighting no longer works, I suggest you try this NuGet package: https://www.nuget.org/packages/UpgradeMvc3ToMvc4
    This may not succeed immediately. First delete the packages folder in your project, then open packages.config and remove all the package entries. Now run this command:

        PM> Install-Package UpgradeMvc3ToMvc4

    If it fails, check which reference caused the error in the console, remove that reference, and try again. Now run the project and it should work. Afterwards you will see that the WebGrease DLL has a version-number issue; simply update it to version 1.5.2, and your project is ready to run on .NET 4. If you do bin deployment, you don't need MVC 4 installed on the server either. Remember that MVC 5 is based on .NET 4.5, which means you can't use it in VS 2010 at all, and until VS 2012 Update 4 its .cshtml pages behave like plain HTML pages (no syntax highlighting or IntelliSense). Thanks for reading my post.

    Read the article

  • Windows print service

    - by user1631171
    Hi, my college has an HP colour printer that can print both A3- and A4-size colour printouts. It is connected to a Windows 2008 print server. The Windows event viewer reports the status of printouts using event ID 307. I would like to know whether it is possible to find out if an A3 page or an A4 page was printed. Is this information also logged in the Windows event viewer, or is there another event ID that captures it? Also, the number of copies printed should be captured in event ID 805, but I have read in some forums that this value is sometimes wrong. Please provide me some information on this. Thanks in advance.
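
    A starting point for pulling the 307 events apart (a sketch; it assumes the PrintService Operational log is enabled):

        # Enable the log once:  wevtutil sl Microsoft-Windows-PrintService/Operational /e:true
        Get-WinEvent -LogName 'Microsoft-Windows-PrintService/Operational' |
            Where-Object { $_.Id -eq 307 } |
            Select-Object -First 5 -ExpandProperty Message

    Event 307's message records the document name, user, printer, size in bytes, and pages printed; paper size is not among its standard fields, so unless the driver logs it elsewhere you would likely need the printer's own job accounting to distinguish A3 from A4.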

    Read the article
