Search Results

Search found 21666 results on 867 pages for 'business objects'.


  • Recommendations on laptops - long battery life, slim, Windows 7

    - by molecule
    Hi, I have been looking at a couple of laptops... The most important requirements are long battery life, slim & light, and running Windows 7. One that seems to fit the bill is the Sony Vaio Z series. They claim to have long battery life, have some really impressive specs (on paper) and a very nice design. They are not on sale where I am yet, but has anyone seen/used them? Second would be the Lenovo ThinkPad X301 (formerly IBM); for business users, ThinkPad seems to be the way to go. I have also considered the Dell Alienware M11x, but it may not be completely appropriate for business use? What would you recommend with a budget of USD 1500-2000?

    Read the article

  • git post-receive hook throws "command not found" error but seems to run properly and no errors when run manually

    - by Ben
    I have a post-receive hook that runs on a central git repository set up with gitolite to trigger a git pull on a staging server. It seems to work properly, but throws a "command not found" error when it is run. I am trying to track down the source of the error, but have not had any luck. Running the same commands manually does not produce an error. The error changes depending on what was done in the commit that is being pushed to the central repository. For instance, if 'git rm <file>' was committed and pushed to the central repo the error message will be "remote: hooks/post-receive: line 16: Removed: command not found", and if 'git add <file>' was committed and pushed to the central repo the error message will be "remote: hooks/post-receive: line 16: Merge: command not found". In either case the 'git pull' run on the staging server works correctly despite the error message. Here is the post-receive script:

        #!/bin/bash
        #
        # This script is triggered by a push to the local git repository. It will
        # ssh into a remote server and perform a git pull.
        #
        # The SSH_USER must be able to log into the remote server with a
        # passphrase-less SSH key *AND* be able to do a git pull without a passphrase.
        #
        # The command to actually perform the pull on the remote server comes
        # from the ~/.ssh/authorized_keys file on the REMOTE_HOST and is triggered
        # by the ssh login.

        SSH_USER="remoteuser"
        REMOTE_HOST="staging.server.com"

        `ssh $SSH_USER@$REMOTE_HOST` # This is line 16

        echo "Done!"

    The command that does the git pull on the staging server is in the ssh user's ~/.ssh/authorized_keys file and is:

        command="cd /var/www/staging_site; git pull",no-port-forwarding,no-X11-forwarding,no-agent-forwarding, ssh-rsa AAAAB3NzaC1yc2EAAAABIwAA... (the rest of the public key)

    This is the actual output from removing a file from my local repo, committing it locally, and pushing it to the central git repo:

        ben@tamarack:~/thejibe/testing/web$ git rm ./testing
        rm 'testing'
        ben@tamarack:~/thejibe/testing/web$ git commit -a -m "Remove testing file"
        [master bb96e13] Remove testing file
        1 files changed, 0 insertions(+), 5 deletions(-)
        delete mode 100644 testing
        ben@tamarack:~/thejibe/testing/web$ git push
        Counting objects: 3, done.
        Delta compression using up to 2 threads.
        Compressing objects: 100% (2/2), done.
        Writing objects: 100% (2/2), 221 bytes, done.
        Total 2 (delta 1), reused 0 (delta 0)
        remote: From [email protected]:testing
        remote:    aa72ad9..bb96e13  master -> origin/master
        remote: hooks/post-receive: line 16: Removed: command not found   # The error msg
        remote: Done!
        To [email protected]:testing
           aa72ad9..bb96e13  master -> master
        ben@tamarack:~/thejibe/testing/web$

    As you can see, the post-receive script gets to the echo "Done!" line, and when I look on the staging server the git pull has been successfully run, but there's still that nagging error message. Any suggestions on where to look for the source of the error message would be greatly appreciated. I'm tempted to redirect stderr to /dev/null but would prefer to know what the problem is.
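    A likely cause (an assumption on my part, not something stated in the post): the backticks on line 16 are command substitution, so bash takes whatever the remote `git pull` prints ("Removed ..." after a git rm, "Merge ..." after a merge), puts it back on the command line and tries to execute its first word - which is exactly the "Removed: command not found" / "Merge: command not found" message. A minimal sketch of the suspected fix:

        #!/bin/bash
        # Sketch of the relevant part of the hook with the suspected fix applied.
        SSH_USER="remoteuser"
        REMOTE_HOST="staging.server.com"

        # Before (line 16): `ssh $SSH_USER@$REMOTE_HOST`
        # The backticks feed ssh's output back to bash as a command to run.

        # After: just run ssh; its output is relayed to the pushing client
        # as the usual "remote:" lines.
        ssh "$SSH_USER@$REMOTE_HOST"

        echo "Done!"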

    Read the article

  • How to transition to Comcast with static IP address

    - by steveha
    I have my own email server in my house, on a static IP address. I have had business DSL for over a decade, but I also now have Comcast business Internet. I want to transition from the DSL to the Comcast, and I have some questions. I have a domain name, my own mail server, and a firewall (a PC with two network interfaces, running Devil-Linux). I need to make sure I understand how to set up the Comcast cable box, and how to set up my firewall. First, do I need to change any settings in the cable box? Currently I have only used the cable box by plugging in a laptop, with the laptop doing DHCP. I think I can leave the box alone but I would like to make sure. Second, I'm not sure I understand the instructions Comcast gave me for setting up the firewall. My DSL provider gave me the following information: static IP address, net mask, gateway, and two DNS servers. Comcast gave me: static IP address, routable static IP address, net mask, and two DNS servers, and told me to put the "static IP address" as the "gateway" on the firewall. Is this just Comcast-speak here? Does "routable static IP address" mean the same thing as "static IP address" in my DSL setup, the end-point address that I should publish in the DNS MX records for my email server? Or should I publish the "static IP address", and Comcast will then route all its traffic over the cable box? My plan is: first, I'm going to configure another firewall, so I have one firewall for the DSL and one for the Comcast (rather than madly editing settings to switch back and forth). Then I will publish the new Comcast static IP address as a backup email server address in the DNS MX records, wait a while to let it propagate, and then switch my home over from the DSL to the Comcast. Then I'll change DNS to make that the primary mail address and the DSL the secondary, let that go a while and make sure it seems reliable. Then I'll remove the DSL from the DNS MX records completely, and finally shut down the DSL service. (I thought about keeping the DSL as a backup, but the reason I'm leaving DSL is that it has become unreliable; and I have heard that Comcast business Internet is reliable.) Final question, any advice for me? Anything you think might be useful, helpful, or educational. Thanks.
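    A hedged sketch of sanity checks for the staged MX cutover described above (all host and domain names here are hypothetical placeholders, not from the post):

        # Before switching: confirm both MX records have propagated and that the
        # Comcast-side address actually answers on port 25.
        dig +short MX example.com                 # expect e.g. "10 mail-dsl.example.com." and "20 mail-comcast.example.com."
        dig +short A mail-comcast.example.com
        nc -zv mail-comcast.example.com 25

        # After swapping priorities (Comcast primary, DSL secondary), re-check that
        # the preferred (lowest) MX now points at the Comcast address before
        # removing the DSL record and cancelling the DSL service.
        dig +short MX example.com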

    Read the article

  • How to automatically remove Flash history/privacy trail? Or stop Flash from storing it?

    - by Arjan van Bentem
    Many people have heard about third-party cookies, and some browsers even block those by default. Some people may even be using Private Browsing modes. However, only few seem to realise that Adobe's Flash player also leaves a cross-browser trail on your local hard drive, and allows for sending cookie-like information back to the server, including third-party sites. And because it is a plugin, Flash does not take any of the browser's privacy settings into account. Sorry for the long post, but first some details about why using Flash raises a privacy concern, followed by the results of my tests: The Flash player keeps a cross-browser history of the domain names of the Flash-sites your computer has visited. Unlike your browser's history, this history is not limited to a certain number of days. History is also recorded while using so-called Private Browsing modes. It is stored on your hard drive (though, as described below, without going to Adobe's site you won't know what is stored). I am not sure if any date and time information is kept about each visit, but to see the domain names: right-click on some Flash content, open the settings dialog, and click the Help icon or click the Advanced button within the Privacy tab. This opens a browser to the help pages on Adobe.com, where one can click through to the Website Storage Settings panel. One can clear the existing list, but one cannot stop it from being recorded again. Flash allows for storing data on your local hard drive, using so-called Local Shared Objects (aka "Flash Cookies"). Just like HTTP cookies, this data can be sent back to the server, for tracking purposes. They are cross-browser, have no expiration date, and no user defined maximum lifetime can be set in the Flash preferences either. These not being HTTP cookies, they are (of course) not blocked by a browser's cookies preferences and are not removed when the normal HTTP cookies are deleted. Adobe has announced that version 10.1 will obey Private Browsing in most popular browsers, but unfortunately no word about also removing the data whenever normal cookies are deleted manually. And its implementation might be confusing: [..] if the browser is in normal browsing mode when the Flash Player instance is created, then that particular instance will forever be in normal browsing mode (private browsing is turned off). Accordingly, toggling private browsing on or off without refreshing the page or closing the private browsing window will not impact Flash Player. Local Shared Objects are not limited to the site you visit, and third-party storage is enabled by default. At the Global Storage Settings panel one can deselect the default Allow third-party Flash content to store data on your computer. Because of the cross-browser and expiration-less nature (and the fact that few people know about it), I feel that the cross-browser third-party Flash Cookies are more dangerous for visitor tracking than third-party normal HTTP cookies. They are even used to restore plain HTTP cookies that the user tried to delete: "All advertisers, websites and networks use cookies for targeted advertising, but cookies are under attack. According to current research they are being erased by 40% of users creating serious problems," says Mookie Tenembaum, founder of United Virtualities. "From simple frequency capping to the more sophisticated behavioral targeting, cookies are an essential part of any online ad campaign. 
PIE ["Persistent Identification Element"] will give publishers and third-party providers a persistent backup to cookies effectively rendering them unassailable", adds Tenembaum. [..] To justify this tracking mechanism, UV's Tenembaum said, "The user is not proficient enough in technology to know if the cookie is good or bad, or how it works." When selecting None (zero KB) for Specify the amount of disk space that website websites that you haven't yet visited can use to store information on your computer, and checking Never ask again then some sites do not work. However, the same site might work when setting it to None but without selecting Never ask again, and then choose Deny whenever prompted. Both options would result in zero KB of data being allowed, but the behaviour differs. The plugin also provides a Flash Player cache for Adobe-signed files. I guess these files are not an issue. So: how to automatically delete that information? On a Mac, one can find a settings.sol file and a folder for each visited Flash-website in: $HOME/Library/Preferences/Macromedia/Flash Player/macromedia.com/support/flashplayer/sys/ Deleting the settings.sol file and all the folders in sys, removes the trail from the settings panels. However, the actual Local Shared Ojects are elsewhere (see Wikipedia for locations on other operating systems), in a randomly named subfolder of: $HOME/Library/Preferences/Macromedia/Flash Player/#SharedObjects But then: how to remove this automatically? Simply removing the folders and the settings.sol file every now and then (like by using launchd or Windows' Task Scheduler) may interfere with active browsers. Or is it safe to assume that, given the cross-browser nature, the plugin would not care if things are removed while it is active? Only clearing during log-off may not work for those who hibernate all the time. Firefox users can install BetterPrivacy or Objection to delete the Local Shared Objects (for all others browsers as well). I don't know if that also deletes the trail of website domain names. Or: how to stop Flash from storing a history trail? Change of plans: I'm currently testing prohibiting Flash to write to its own sys and #SharedObjects folders. So far, Flash has not tried to restore permissions (though, when deleting the folders, Flash will of course recreate them). I've not encountered any problems but this may take some while to validate, using multiple browsers and sites. I've not yet found a log that reports errors. On a Mac: cd "$HOME/Library/Preferences/Macromedia/Flash Player/macromedia.com/support/flashplayer" rm -r sys/* chmod u-w sys cd "$HOME/Library/Preferences/Macromedia/Flash Player" # preserve the randomly named subfolders (only preserving the latest would suffice; see below) rm -r \#SharedObjects/*/* chmod -R u-w \#SharedObjects I guess the above chmods cannot be achieved on an old Windows system (I'm not sure about XP and Vista?). Though maybe on Windows one could replace the folders sys and #SharedObjects with dummy files with the same names? Anyone? Obviously, keeping Flash from storing those Local Shared Objects for all sites may cause problems. Some test results (Flash 10 on Mac OS X): When blocking the sys folder (even when leaving the #SharedObjects folder writable) then YouTube won't remember your volume settings while viewing multiple videos. Temporarily allowing write access to the blocked folders while visiting trusted sites (to only create folders for domains you like, maybe including references in settings.sol) solves that. 
This way, for YouTube, Flash could be allowed to write to sys/#s.ytimg.com and #SharedObjects/s.ytimg.com, while Flash could not create new folders for other domains. One may also need to make settings.sol read-only afterwards, or delete it again. When blocking both the sys and #SharedObjects folders, YouTube and Vimeo work fine (though they might not remember any settings). However, Bits on the Run refuses to even show the video player. This is solved by temporarily unblocking the #SharedObjects folder, to allow Flash to create a subfolder with some random name. Within this folder, it would create yet another folder for the current Flash website (content.bitsontherun.com). Removing that website-specific folder, and blocking both #SharedObjects and the randomly named subfolder, still seems to allow Bits on the Run to operate, even though it still cannot write anything to disk. So: the existence of the randomly named subfolder (even when write protected) is important for some sites. When I first found the #SharedObjects folder, it held many subfolders with random names, some created on the very same day. I wonder when Flash decides it wants a new folder, and how it determines (and remembers) that random name. For a moment I considered not blocking write access for sys and #SharedObjects, but explicitly creating read-only folders for well-known third-party tracking domains (like based on a list from, for example, AdBlock Plus). That way, any other domain could still create Local Shared Objects. But the list would be long, and the domains from AdBlock Plus are probably all third-party domains anyway, so disabling Allow third-party Flash content to store data on your computer might have the very same result. Any experience anyone? (Final notes: if the above links to the settings panels do not work in the future, then use the URL that is known to Flash player as a starting point: www.adobe.com/go/settingsmanager. See also "You Deleted Your Cookies? Think Again" at Wired.com -- which uses Flash cookies itself as well... For the very suspicious using Time Machine: you may want to exclude both folders, for each user, and remove the trace that is already on your backup.)
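    For the "automatically delete" approach above, a minimal sketch of a cleanup script using the same Mac OS X paths quoted in the post (run it from launchd, cron or a logout hook; whether it is safe to run while a browser with an active Flash instance is open is exactly the unresolved question above):

        #!/bin/bash
        # Sketch: wipe the Flash history trail and the Local Shared Objects.
        FLASH_DIR="$HOME/Library/Preferences/Macromedia/Flash Player"

        # Per-site history folders and settings.sol under .../support/flashplayer/sys
        rm -rf "$FLASH_DIR/macromedia.com/support/flashplayer/sys/"*

        # The actual Local Shared Objects live in randomly named subfolders here
        rm -rf "$FLASH_DIR/#SharedObjects/"*/*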

    Read the article

  • Icinga error "Icinga Startup Delay does not exist" although it does

    - by aaron
    I just installed Icinga to monitor my server following this guide: http://docs.icinga.org/0.8.1/en/wb_quickstart-idoutils.html Everything built and installed correctly, but Icinga is reporting a critical error with the reason: "The command defined for service Icinga Startup Delay does not exist". However, I can see that ${ICINGA_BASE}/etc/objects/localhost.cfg contains:

        define service{
                use                     local-service   ; Name of service template to use
                host_name               localhost
                service_description     Icinga Startup Delay
                check_command           check_icinga_startup_delay
                notifications_enabled   0
                }

    and ${ICINGA_BASE}/etc/objects/commands.cfg contains:

        define command {
                command_name    check_icinga_startup_delay
                command_line    $USER1$/check_dummy 0 "Icinga started with $$(($EVENTSTARTTIME$-$PROCESSSTARTTIME$)) seconds delay | delay=$$(($EVENTSTARTTIME$-$PROCESSSTARTTIME$))"
                }

    Neither of these files has been modified since the whole make/install process. I am running on Ubuntu 10.04, the most recent build of icinga-core, and apache2 2.2.14. What must I do to tell Icinga that the command exists? Or is the problem that check_dummy does not exist? Where or how would I define that?
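    A hedged sketch of how to narrow this down, assuming the default install prefix used by the quick-start guide (/usr/local/icinga); the paths may well differ on this particular install:

        # What does $USER1$ expand to? It is defined in resource.cfg.
        grep 'USER1' /usr/local/icinga/etc/resource.cfg

        # Does the plugin the command relies on actually exist there?
        ls -l /usr/local/icinga/libexec/check_dummy

        # Is commands.cfg among the cfg_file entries Icinga loads, and does the
        # configuration pass a verification run?
        grep 'cfg_file' /usr/local/icinga/etc/icinga.cfg
        /usr/local/icinga/bin/icinga -v /usr/local/icinga/etc/icinga.cfg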

    Read the article

  • SMTP Verb Error on MSExchange Server 2003

    - by Jason Adams
    Hi, Every morning for the last two weeks or more I've had to reboot our Exchange Server, and often I have to reboot it again during the day. We use a smarthost for sending our mail out, and if I view the queues in Exchange System Manager the Small Business SMTP Connector is in a retry state with "The connection was dropped due to an SMTP protocol event sink". I turned logging up to maximum on ExchangeTransport and the only non-information event in Event Viewer is "Message delivery to the host '62.13.128.187' failed while delivering to the remote domain 'mail.authsmtp.com' for the following reason: The connection was dropped due to an SMTP protocol event sink. The SMTP verb which caused the error is 'x-exps'. The response from the remote server is ''." I stopped using the smarthost during the error condition and all I got was lots of Small Business connector connections with the same error. I can telnet into mail.authsmtp.com and send a mail during the error state. Any pointers would be gratefully received.
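    X-EXPS is the Exchange-specific SMTP authentication verb, so one hedged way to dig further (an assumption, not something from the post) is to hand-drive the same session the connector attempts and compare the extensions the smarthost advertises with what the connector tries to use:

        # Manually reproduce the connector's session against the smarthost.
        telnet mail.authsmtp.com 25
        # then type, one line at a time:
        #   EHLO mydomain.example        (hypothetical local FQDN)
        #   ...note which extensions are advertised (AUTH, STARTTLS, etc.);
        #   X-EXPS is Exchange-to-Exchange authentication, which most external
        #   smarthosts will not offer - consistent with that verb being rejected.
        #   QUIT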

    Read the article

  • Files Corrupted on System Restore

    - by Yar
    I restored my OSX today by copying the system over from a backup. Most things seem to be working, but every single git repo gives pretty much the same error:

        fatal: object 03b45161eb27228914e690e032ca8009358e9588 is corrupted

    I have tried chowning, doing everything as sudo or root... I have no idea what to try next. This would be a normal git question except that it's on many repos. Ideas? Note: I'm using git 1.7.0.3 and I was probably using 1.7.0 before. Edit: Tried with 1.7.0.2 and it made no difference. Edit: Even when copying any of the repos I get this strange message:

        cp: .git/objects/fe/86b676974a44aa7f128a55bf27670f4a1073ca: could not copy extended attributes to /eraseme/Pickers/.git/objects/fe/86b676974a44aa7f128a55bf27670f4a1073ca: Operation not permitted
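    The "could not copy extended attributes ... Operation not permitted" message hints that the restore brought extended attributes/ACLs along with the object files. A hedged sketch of things to check, using standard macOS and git tooling (the repo path is a placeholder):

        cd /path/to/repo                       # placeholder

        # Do the restored object files carry extended attributes or ACLs?
        ls -le@ .git/objects/03/b45161eb27228914e690e032ca8009358e9588

        # Strip extended attributes recursively, then let git re-verify:
        xattr -rc .git
        git fsck --full

        # If objects really are corrupt, fetch good copies from a remote
        # that still has them:
        git fetch origin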

    Read the article

  • Setting up a Windows Server 2008 R2 DC + Fileserver: native or virtual?

    - by user126890
    I want to deploy a new DC + fileserver using Windows Server 2008 R2 SP1 Standard Edition on a Dell PowerEdge R410 with iSCSI storage for a small business (~30 people). Should I install the system natively on the server or use a virtualization layer? I don't have a budget for virtualization, so I gotta go with something free... What's a better working routine: taking snapshots of VMs, or taking backups (Acronis/CloneZilla) of systems? If I use a virtualization system, I need a GUI so that some people in the business can reset the system to an earlier state in emergency situations. I wanted to install phpVirtualBox once but never finished; is it suitable in a production environment? Server specs: Intel Xeon E5620 CPU (2.40 GHz, 4C, 12 MB cache), 8 GB RAM dual-rank LV RDIMMs 1333 MHz, 2x 1 TB SATA 7.2K 3.5", RAID1.

    Read the article

  • How do I match movement of an object from 2D video into a 3D package?

    - by George Profenza
    I'm trying to add objects in a 3D package (Blender) using recorded footage. I've played with Icarus and it's great for capturing the camera movement, and the Blender 2.41 importer script works in Blender 2.49 as well. The problem is I can't seem to get 3D coordinates for objects. I have tried Autodesk (RealViz) MatchMover 2011 and gone through the tutorials. Tutorial 3 shows how to link a vertex from a 3D mesh to a 2D track point, but the setup is for camera movement. Tutorial 4 goes into motion capture, but it uses 2 videos of the same motion taken with 2 cameras from different viewpoints. I've tried to bypass that by using the same footage twice, but that failed, as the 3D coordinate system ends up messed up. What software do you recommend for this (mapping 3D coordinates to 2D tracked points and importing them into a 3D package)? What is the recommended technique? Any good examples out there? Thanks, George

    Read the article

  • Project management, timesheet and planning software

    - by hfidgen
    Hiya, I'm trying to find an integrated PM solution which will give my business all of the following:

    - Timesheeting, so we can track time spent on tasks
    - Holiday planner (integrated with timesheet and project management)
    - Project management tool, integrating the above, with milestones, Gantt chart, dependencies etc.
    - Forecasting ability (nice to have, but not a requirement)
    - Reporting capability - especially time spent on projects, costs etc.

    Now yeah, that's quite a lot of functionality, I appreciate that! But currently we've got 3 systems, none of which really talk to each other, and it's a right headache. So far we've looked at:

    - OpenWorkbench - not enough features
    - Basecamp - not enough features and too reliant on being online
    - MS Project - too expensive?

    Can anyone throw some other hats into the ring which maybe I've not heard about? Really interested to hear how other people have approached this; it's not an unusual business requirement! Thanks!

    Read the article

  • Using Squid on Debian, Cannot Connect Error

    - by Zed Said
    I am trying to set up Squid on Debian and am getting a connection refused error:

        squidclient http://www.apple.com/ > test
        client: ERROR: Cannot connect to 127.0.0.1:3128: Connection refused

    Here is my config:

        visible_hostname none
        cache_effective_user proxy
        cache_effective_group proxy
        cache_dir ufs /var/spool/squid 2048 16 256
        cache_mem 512 MB
        cache_access_log /var/log/squid/access.log
        emulate_httpd_log on
        strip_query_terms off
        read_ahead_gap 128 Kb
        collapsed_forwarding on
        refresh_stale_hit 30 seconds
        retry_on_error on
        maximum_object_size_in_memory 1 MB
        acl all src 0.0.0.0/0.0.0.0
        acl purgehosts src 127.0.0.1/255.255.255.255
        # Caching static objects in __data is important.
        # Without that, apache processes sit around spooling static objects.
        acl QUERY urlpath_regex /cgi-bin/ /_edit /_admin /_login /_nocache /_recache /__lib /__fudge
        acl PURGE method PURGE
        acl POST method POST
        cache deny QUERY
        cache deny POST
        http_access allow PURGE purgehosts
        http_access deny PURGE
        http_access allow all
        http_port 127.0.0.1:80
        http_port 50.56.206.139:80
        cache_peer 127.0.0.1 parent 80 0 originserver no-query no-digest default
        redirect_rewrites_host_header off
        read_ahead_gap 128 Kb
        shutdown_lifetime 5 seconds

    Any ideas why this is happening? What have I missed?
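    Two things stand out as assumptions to verify (not a confirmed diagnosis): the config binds http_port 80, while squidclient connects to 3128 by default, and Squid may not be running at all if it failed to bind port 80. A hedged sketch of basic checks (binary path assumed; on Debian it may be squid or squid3):

        # Is Squid running, and which ports is it listening on?
        /usr/sbin/squid -k check
        netstat -lntp | grep -E ':(80|3128)'

        # Does the config parse, and what does cache.log say on startup?
        /usr/sbin/squid -k parse
        tail -n 50 /var/log/squid/cache.log

        # Test against the port the config actually uses:
        squidclient -p 80 http://www.apple.com/ > test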

    Read the article

  • Is there software that can fill in forms on the Internet automatically, and if so, what kind of server would work best? [closed]

    - by Stevew51
    Possible Duplicate: Firefox Form Fill Add On. I have been looking to get into that fill-in-forms-for-cash type of job/business. I have been searching for software that can do the job automatically - I guess some sort of copy and paste; I have been told that there is software for everything. I need software that can fill in forms on the Internet automatically. And if so, what kind of server works best? Not what brand, what kind of server. I am not sure if you understand what I am looking for: I am looking for software that can take information from a business site and automatically place it in a form on that same site. I am not asking what brand I should buy, but what kind of server it is.

    Read the article

  • Does anyone provide a Skype connection service?

    - by Runc
    Is there a way of offering Skype access to incoming calls while keeping all telephony traffic over our chosen business telephony provision? Is there a 3rd party who can route incoming Skype calls to our telephone system? The business has had requests from contacts wanting to call us via Skype, but we want to keep all telephony via our PBX and phone lines as our geographic location limits our available internet bandwidth. We also prevent installation of non-standard applications on desktops and do not want to add Skype to our build. I was wondering if there were any 3rd parties that provide a connection service that would allow our contacts to call via Skype and us receive the calls via our phone system.

    Read the article

  • Linked Tables not working With Access Database

    - by Kronass
    Hi, I have an Access database on a computer and I want to access it from another computer on the network. So I made a mapped drive and created linked tables, then imported all the objects (forms, queries, reports). When I open the Access database on the second computer and make any changes using the forms, none of the changes show up on the main computer (supposedly the server), and vice versa. What am I missing? If this way will not work, how can I access the Access database from another computer on the network and be able to use all the objects and make changes in it? Hint: the Access version on the main computer is 2003 and on the client PC 2007 - will that affect it?

    Read the article

  • Postfix + Exchange + ActiveDirectory

    - by itwb
    Client has got many sub-offices, and one head office. The head office has a domain name: business.com. All users in the many sub-offices need to have a head-office email address: [email protected]. Anyone not in the head office will need the email forwarded to an external email address. All users in the head office will have their email delivered to Exchange. Users are listed in Active Directory under 2 different OUs, "HeadOffice" or "SubOffice". Is this something that can be configured? I've done some googling, but I can't find any examples or businesses set up this way. Thanks
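    One hedged way to approach this (an assumption, not an existing setup from the post) is to drive the routing decision from an LDAP lookup against Active Directory, since the OU already encodes who is in the head office. A sketch of testing such a query with ldapsearch before wiring it into Postfix; every host name, DN and attribute below is a hypothetical placeholder:

        # Can the OU membership and a forwarding address be read from AD?
        ldapsearch -x -H ldap://dc1.business.com \
          -D "lookup@business.com" -W \
          -b "OU=SubOffice,DC=business,DC=com" \
          "(mail=*)" mail targetAddress

        # Head-office users would be queried the same way under OU=HeadOffice;
        # Postfix could then use an ldap: lookup table (e.g. transport_maps or
        # virtual_alias_maps) to either relay to Exchange or rewrite to the
        # external address.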

    Read the article

  • Database which only holds indexes and last X records in memory?

    - by Xeoncross
    I'm looking for a data store that is very memory efficient while still allowing many object changes per second and disregarding ACID compliance for the last X records. I need this database for a server with not much memory, and I can make a key-value store, document, or SQL database work. The idea is that indexes/keys are the only thing I need in memory and all the actual values/objects/rows can be saved on disk due to the low read rate (I just want index/key lookup to be fast). I also don't want records constantly being flushed to disk, so I would like the last X number of records to be held in memory so that 100 or so of them can all be written at once. I don't care if I lose the last 10 seconds' worth of objects/values. I do care if the database as a whole is in danger of becoming corrupt. Is there a data store like this?

    Read the article

  • How does Java permgen relate to code size

    - by brad
    I've been reading a lot about Java memory management, garbage collection et al., and I'm trying to find the best settings for my limited memory (1.7 GB on a small EC2 instance). I'm wondering if there is a direct correlation between my code size and the permgen setting. According to Sun: "The permanent generation is special because it holds data needed by the virtual machine to describe objects that do not have an equivalence at the Java language level. For example objects describing classes and methods are stored in the permanent generation." To me this means that it's literally storing my class def'ns etc... Does this mean there is a direct correlation between my compiled code size and the permgen I should be setting? My whole app is about 40 MB and I noticed we're using a 256 MB permgen. I'm thinking maybe we're using memory that could be better allocated to dynamic code like object instances etc...
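    Rather than inferring permgen needs from the 40 MB of jars, the occupancy can be measured directly. A hedged sketch, assuming a Sun/HotSpot JVM of the Java 6/7 era (permgen no longer exists from Java 8 on); the jar name and pid are placeholders:

        # Watch GC output, including the perm generation, while the app runs:
        java -XX:MaxPermSize=256m -verbose:gc -XX:+PrintGCDetails -jar myapp.jar

        # Or inspect an already-running JVM:
        jmap -heap 1234                 # shows "Perm Generation" committed vs used
        jstat -gcpermcapacity 1234      # current/min/max permgen capacity

        # If actual use stays well below 256 MB, -XX:MaxPermSize can be lowered
        # and the difference given back to the heap on the 1.7 GB instance.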

    Read the article

  • How to change GUI language in Outlook 2007

    - by user1466
    A new guy at work moved in from Denmark, which means that he initially logged in to our Outlook Web Access 2007 from a computer with Danish Windows. As a result, all the objects in the tree-view in Outlook are now in Danish. For example, "Inbox" is called "Indbakke". This prevails, even though he has now logged in locally on his assigned work computer which has English Windows. We're running Exchange 2003, if that matters. How do you change the language of the names of the objects in Outlook 2007? The "Microsoft Office 2007 Language Settings" tool doesn't do this, and I couldn't find anything relevant to this by googling either. In Exchange System Manager there are the "Details Templates" which define these things in different languages, but over on his mailbox there was no configuration option to change which language to use.
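    One commonly cited fix for exactly this situation (a suggestion on my part, not something from the post) is Outlook's documented startup switch that recreates the default folder names in the language of the locally installed Outlook client. Run it once from Start > Run on his English-Windows machine:

        outlook.exe /resetfoldernames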

    Read the article

  • At what point do Active Directory and Domain Services become necessary? [closed]

    - by user970638
    I see time and time again such services running in a business environment with only 10 users. Everyone in the office authenticates with the DC and interacts with a shared drive where files and documents are stored. I can't help but think...reeeeealy? But I don't know, that's why I'm asking. To me it seems like you need to reach a certain threshold of size and needs before you throw a DC into the mix. ie: a 20+ user business (and growing) with permission requirements that separate the sales team from accounting. Thoughts?

    Read the article

  • Multi Monitor setup goes crazy after locking/unlocking Vista machine

    - by Farseeker
    Give me a 10-blade quad-processor quad-core Opteron centre and ask me to configure failover/load balancing and I'd be happy to, but the following problem has got me completely stumped. My Vista Business machine, running UltraMon, has three monitors attached. When I lock the machine (to go to the delicious cafe around the corner), the monitor layout stays correct. When I unlock it, I watch as all my screens flicker (as they are being re-configured), and Vista chooses some crazy layout for the monitors. The most recent one is below, but it's never consistent. Any ideas what might cause this? It's Vista Business, with UltraMon 3.0 (exiting UltraMon makes no difference).

    Read the article

  • What should we be aware of when moving Windows servers to another domain

    - by Klaus Byskov Hoffmann
    Hi everyone, We have a bunch of (virtual VMware) Windows servers (2003 and 2008) that our hosting provider wants to move to a new domain. They also want to rename the servers. The hosting provider is in charge of maintaining the servers, while we are in charge of making sure that all our business applications are working. Our business applications include custom-developed .NET applications using such things as SQL Server 2008, TFS 2010, ASP.NET, some legacy COM+ apps, etc. To be honest I don't feel too convinced that this migration will be as painless as the hosting provider wants to make it sound. I would greatly appreciate any input on what we should be aware of when discussing the practicalities involved in the migration with the hosting provider. Thanks in advance. Klaus

    Read the article

  • How do I remove Slony from a restored PostgreSQL database?

    - by Scott Herbert
    I've restored a database which came from a server on which Slony was running. The server on which the database has been restored does not have Slony installed. When the database was restored, there were a lot of errors reported, with Slony-related objects not getting created due to Slony-related logins being missing. This I thought was not a problem, as losing the Slony objects didn't seem to matter, and in fact seemed desirable. However, now I've got an annoying, if not critical, problem. Whenever one clicks on a table in the newly restored DB in pgAdmin, a Slony-related error popup ... pops up. The first one reads: "An error has occured: ERROR: function _rmscl.getlocalnodeid(unknown) does not exist" I notice that under the Replication node in pgAdmin, there is a Slony replication cluster. Trying to drop this cluster results in more object-missing type errors. Does anyone have any ideas how we can remove the last vestiges of Slony from this database?
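    On a restored copy that will never replicate again, all the leftover Slony objects live inside the cluster schema - apparently "_rmscl" here, judging by the error message - so a hedged sketch of the manual cleanup is simply to drop that schema. Take a backup first, and never do this on a node that is still part of a live Slony cluster; the database name below is a placeholder:

        pg_dump -Fc -f before_slony_cleanup.dump mydb
        psql -d mydb -c 'DROP SCHEMA "_rmscl" CASCADE;'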

    Read the article
