Search Results

Search found 21640 results on 866 pages for 'local storage'.


  • How should I troubleshoot a problematic wireless connection on Linux?

    - by Gearoid Murphy
    I recently purchased a Netgear 150 USB wireless dongle for use with my Xubuntu 11.10 amd64 system. Using the network-manager interface, I can see local wireless networks and enter the authentication details for my local wireless LAN. Unfortunately, the connection does not seem to work: I keep getting notifications that my wireless has disconnected (but none indicating that I've connected). Syslog seems to indicate that I've successfully associated with the wireless switch and that DHCP has acquired an IP address, but it also shows the DHCP process repeatedly sending requests and eventually dropping the connection. 'ifconfig wlan0' never shows the DHCP address logged in syslog. I suspect the problem lies with the USB dongle, my configuration, or the wireless switch, but I am not certain how to isolate it. Can anyone provide some insight on how to home in on the cause, or on verifying the functionality of the individual components? Thanks.
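
    A hedged sketch of how the pieces could be checked one by one, assuming the interface is wlan0 and the logs are in /var/log/syslog (both are assumptions, not taken from the question):

        # is the dongle detected and is a driver bound to it?
        lsusb
        dmesg | grep -iE 'wlan|wireless|firmware'

        # does association work, and does DHCP complete when run by hand?
        sudo dhclient -v wlan0

        # did an address actually end up on the interface?
        ifconfig wlan0

        # watch the negotiation live in a second terminal while reconnecting
        tail -f /var/log/syslog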

    Read the article

  • rsync doesn't use delta transfer on first run

    - by ockzon
    I'm trying to synchronize a large local directory (with a batch file using rsync 3.0.7 on Cygwin, Windows 7 x64; 30k files, 200 GB) to a remote server (Debian x64 with kernel 2.6, rsyncd 3.0.7) over a slow internet connection (90 kbyte/s upload). I know almost all files are identical and I verified that using md5sum locally and remotely. However, when executing rsync from my local machine, every file gets transferred completely the first time. When I terminate the batch file after a few transfers and run it again, the already transferred files are skipped. But as soon as it gets to a file not yet transferred, it uploads the whole file again instead of noticing that the checksum is the same locally and remotely. The batch file calling rsync looks like this (backslashes and line breaks added here for readability):

        c:\cygwin\bin\rsync.exe --verbose --human-readable --progress --stats \
            --recursive --ignore-times --password-file pwd.txt \
            /cygdrive/d/ftp/data/ \
            rsync://[email protected]:33400/data/ | \
            c:\cygwin\bin\tee.exe --append rsync.log

    I experimented with the following parameters in varying combinations, but that didn't help either: --checksum --partial --partial-dir=/tmp/.rsync-partial --compress
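
    One variant worth trying, as a hedged sketch (paths and the daemon address are copied from the question): drop --ignore-times, which defeats the quick size-and-mtime check that would otherwise let identical files be skipped cheaply, and keep --partial so an interrupted upload can be resumed instead of restarted:

        c:\cygwin\bin\rsync.exe --verbose --human-readable --progress --stats \
            --recursive --partial --password-file pwd.txt \
            /cygdrive/d/ftp/data/ \
            rsync://[email protected]:33400/data/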

    Read the article

  • Identifying a Windows service

    - by András
    On my Win7 machine, when I watch a movie in XBMC, once per hour the desktop regains focus. The movie keeps playing, but now in the background, so I can only hear it. It is quite irritating. I noticed that the Windows Application log always contains two entries for this time: "The description for Event ID 0 from source Self-service Plug-in cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component on the local computer." One entry is for starting, and one for stopping 60 seconds later. This keeps repeating, with successive starts 3600 seconds apart. How can I repair it? How can I find out which service it is?

    Read the article

  • Where can I find a link to download the SP2 of OES2?

    - by Philippe
    Hi, I have a Novell NetWare server with eDirectory and various objects configured. I set up an OES2 SP1 server to emulate DSfW so I can manage eDirectory with AD. I joined the domain with the Administrator login and am logged in as the domain Administrator. So far, there are no problems. When I open the MMC window on Windows Server 2008 and add the "Active Directory Users and Computers" snap-in, I can see all the OUs and objects present on the NetWare server. But when I select some OUs I get an error, while for others I don't. Error: "Data from XXXXX is not available from Domain Controller OES2.yyyy.local because: The server is unwilling to process the request. Try again later, or choose another DC by selecting Connect to Domain Controller on the Domain context menu." Here XXXXX is the OU's name, yyyy.local is the domain name, and OES2 is the server name. If somebody can upload this SP or post a link to download it... Thank you for your help!

    Read the article

  • Why am I getting 'undefined method' exceptions when executing 'run_list add', 'run_list remove' and 'rackspace server delete'?

    - by Peter Groves
    [Originally posted this to the Opscode forum, got no response] I'm testing out a free hosted chef-server account and multiple subcommands are failing with 'Unexpected Errors'. Perhaps my version and the server version are incompatible? OS: Ubuntu 12.04 LTS. Local Chef: 10.12.0 (installed through gem). Local Ruby: 1.8.7. Also, the workstation machine has been manually configured, but the clients I've been experimenting with are launched with the Rackspace plugin (using 'knife rackspace server create…'). The problem commands seem to fail when talking to the hosted chef-server, before they ever try to modify the client, so I don't believe that's where the problem lies. The virtual servers launched by 'knife rackspace server create' come up properly, but deleting them with knife fails. If I include a recipe in the run_list when I create the server, the recipe is properly added to the run_list. If I try to add it later, or remove the one the server was initialized with, those commands fail. Here is the output of a few relevant commands (with stacktraces): https://gist.github.com/7100ada3fd6690113697
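
    For reference, a hedged sketch of what the failing subcommands look like in knife 10.x syntax (node name, recipe and instance ID are placeholders), which can help rule out simple argument mistakes:

        knife node run_list add NODE_NAME 'recipe[my_cookbook]'
        knife node run_list remove NODE_NAME 'recipe[my_cookbook]'
        knife rackspace server delete INSTANCE_ID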

    Read the article

  • Tried to install some software, it says some packages are damaged, cannot fix them

    - by lempira
    So, I go to the Ubuntu Software Center, and as soon as it opens, a window pops up with the following text: "Items cannot be installed or removed until the package catalog is repaired. Do you want to repair it now?" I click the "Repair" button, and a new window pops up with the text: "Package operation failed. The installation or removal of a software package failed." Clicking the "Details" button returns the following:

        installArchives() failed: Can't exec "locale": No such file or directory at /usr/share/perl5/Debconf/Encoding.pm line 16.
        Use of uninitialized value $Debconf::Encoding::charmap in scalar chomp at /usr/share/perl5/Debconf/Encoding.pm line 17.
        Preconfiguring packages ...
        Can't exec "locale": No such file or directory at /usr/share/perl5/Debconf/Encoding.pm line 16.
        Use of uninitialized value $Debconf::Encoding::charmap in scalar chomp at /usr/share/perl5/Debconf/Encoding.pm line 17.
        Preconfiguring packages ...
        Can't exec "locale": No such file or directory at /usr/share/perl5/Debconf/Encoding.pm line 16.
        Use of uninitialized value $Debconf::Encoding::charmap in scalar chomp at /usr/share/perl5/Debconf/Encoding.pm line 17.
        Preconfiguring packages ...
        dpkg: warning: 'ldconfig' not found in PATH or not executable.
        dpkg: error: 1 expected program not found in PATH or not executable.
        Note: root's PATH should usually contain /usr/local/sbin, /usr/sbin and /sbin.
        Error in function: SystemError: E:Sub-process /usr/bin/dpkg returned an error code (2)
        dpkg: warning: 'ldconfig' not found in PATH or not executable.
        dpkg: error: 1 expected program not found in PATH or not executable.
        Note: root's PATH should usually contain /usr/local/sbin, /usr/sbin and /sbin.

    What should I do?
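
    The dpkg messages point at root's PATH missing the standard sbin directories, so ldconfig cannot be found. A hedged sketch of one possible repair from a terminal (the PATH value follows the note in the error output; treat it as a sketch, not a guaranteed fix):

        sudo -i
        export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
        dpkg --configure -a
        apt-get install -f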

    Read the article

  • Inbound connections using Internet Connection Sharing in Apple/Mac/Leopard

    - by tlianza
    I have a Mac mini which I'm using to give some other devices wireless access, by sharing its AirPort connection with the local ethernet, which is plugged into a switch. All devices can get online, no problem. (See how: http://www.macosxhints.com/article.php?story=20041112101646643 and http://www.macosxhints.com/article.php?story=20071223001432304 ) The issue is that I need to be able to connect in to these machines as well (at least, for the Slingbox to work). All the devices have 192.168.2.* addresses, and the rest of my local network is on 192.168.1.*. I tried setting a static route so that the 192.168.2.* addresses would use a gateway of 192.168.1.50 (my Mac mini's address), but that didn't seem to help. Does anyone know if what I'm trying to do is possible? I admit I'm not certain what Internet Connection Sharing is really doing under the hood... perhaps it just does basic NAT and doesn't do the type of routing I'm looking for. If so, does anyone know if this is possible?
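
    For what it's worth, a hedged sketch of the static route on another Mac on the 192.168.1.* side (addresses are taken from the question; whether the mini's connection sharing will actually forward inbound traffic this way is exactly what is in doubt):

        sudo route -n add -net 192.168.2.0/24 192.168.1.50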

    Read the article

  • The Growing Importance of Network Virtualization

    - by user12608550
    We often focus on server virtualization when we discuss cloud computing, but just as often we neglect to consider some of the critical implications of that technology. The ability to create virtual environments (or VEs [1]) means that we can create, destroy, activate and deactivate, and more importantly, MOVE them around within the cloud infrastructure. This elasticity and mobility has profound implications for how network services are defined, managed, and used to provide cloud services. It's not just servers that benefit from virtualization, it's the network as well.

    Network virtualization is becoming a hot topic, and not just for discussion but for companies like Oracle and others who have recently acquired net virtualization companies [2,3]. But even before this topic became so prominent, Solaris engineers were working on technologies in Solaris 11 to virtualize network services, known as Project Crossbow [4].

    And why is network virtualization so important? Because old assumptions about network devices, topology, and management must be re-examined in light of the self-service, elasticity, and resource sharing requirements of cloud computing infrastructures. Static, hierarchical network designs, and inter-system traffic flows, need to be reconsidered and quite likely re-architected to take advantage of new features like virtual NICs and switches, bandwidth control, load balancing, and traffic isolation. For example, traditional multi-tier Web services (Web server, App server, DB server) that share net traffic over Ethernet wires can now be virtualized and hosted on shared-resource systems that communicate within a larger server at system bus speeds, increasing performance and reducing wired network traffic. And virtualized traffic flows can be monitored and adjusted as needed to optimize network performance for dynamically changing cloud workloads. Additionally, as VEs come and go and move around in the cloud, static network configuration methods cannot easily accommodate the routing and addressing flexibility that VE mobility implies; virtualizing the network itself is a requirement.

    Oracle Solaris 11 [5] includes key network virtualization technologies needed to implement cloud computing infrastructures. It includes features for the creation and management of virtual NICs and switches, and for the allocation and control of the traffic flows among VEs [6]. Additionally it allows for both sharing and dedication of hardware components to network tasks, such as allocating specific CPUs and vNICs to VEs, and even protocol-specific management of traffic.

    So, have a look at your current network topology and management practices in view of evolving cloud computing technologies. And don't simply duplicate the physical architecture of servers and connections in a virtualized environment… rethink the traffic flows among VEs and how they can be optimized using Oracle Solaris 11 and other Oracle products and services.

    [1] I use the term "virtual environment" or VE here instead of the more commonly used "virtual machine" or VM, because not all virtualized operating system environments are full OS kernels under the control of a hypervisor… in other words, not all VEs are VMs. In particular, VEs include Oracle Solaris zones, as well as SPARC VMs (previously called LDoms), and x86-based Solaris and Linux VMs running under hypervisors such as OEL, Xen, KVM, or VMware.
    [2] Oracle follows VMware into network virtualization space with Xsigo purchase; http://www.mercurynews.com/business/ci_21191001/oracle-follows-vmware-into-network-virtualization-space-xsigo
    [3] Oracle Buys Xsigo; http://www.oracle.com/us/corporate/press/1721421
    [4] Oracle Solaris 11 Networking Virtualization Technology; http://www.oracle.com/technetwork/server-storage/solaris11/technologies/networkvirtualization-312278.html
    [5] Oracle Solaris 11; http://www.oracle.com/us/products/servers-storage/solaris/solaris11/overview/index.html
    [6] For example, the Solaris 11 'dladm' command can be used to limit the bandwidth of a virtual NIC, as follows: dladm create-vnic -l net0 -p maxbw=100M vnic0
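
    As a slightly fuller illustration of footnote [6], a hedged sketch of building an internal virtual switch (etherstub) with two bandwidth-capped vNICs on Solaris 11; the device names are made up:

        # create an internal virtual switch
        dladm create-etherstub stub0

        # attach two virtual NICs to it, each capped at 100 Mbit/s
        dladm create-vnic -l stub0 -p maxbw=100M vnic1
        dladm create-vnic -l stub0 -p maxbw=100M vnic2

        # inspect the result
        dladm show-vnic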

    Read the article

  • Problems installing icinga-web

    - by Kungurov
    I'm using Ubuntu 10.04 LTS (64-bit, Server) and Apache 2.2.14. Following the instructions from the official Icinga page http://docs.icinga.org/latest/en/index.html I installed icinga-web-1.7.1 on my machine and configured a few hosts for test purposes. The Classic Interface runs as expected, but the new Web Interface does not show any data. When I try:

        ps aux | grep ido2db | grep -v grep

    I get:

        icinga 27425 0.0 0.0 41464 600 ? Ss Jul27 0:00 /usr/local/icinga/bin/ido2db -c /usr/local/icinga/etc/ido2db.cfg

    which might indicate a problem with idomod/ido2db, because according to the docs the grep should show at least 2 processes. Any ideas how to fix that?
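
    Two hedged checks that may narrow this down; the paths simply follow the /usr/local/icinga prefix visible above and are assumptions:

        # is idomod loaded as an event broker module?
        grep -E '^(broker_module|event_broker_options)' /usr/local/icinga/etc/icinga.cfg

        # does the ido2db socket the module should write to exist?
        ls -l /usr/local/icinga/var/ido.sock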

    Read the article

  • How to mount private network shares on login?

    - by bainorama
    I've read all the existing entries I could find on using pam_mount, but none of them seem to work for me. I'm trying to automatically mount shares on my local NAS at user login time. The usernames and passwords on my NAS shares match my local user name and password, but there is no LDAP/AD server. My pam_mount.conf has the following:

        <volume fstype="cifs" server="bain-brain" path="movies" user="*" sgrp="bains"
                mountpoint="/home/%(USER)/movies"
                options="user=%(USER),dir_mode=0700,file_mode=700,nosuid,nodev" />

    When I log in, I see the following in /var/log/auth.log:

        Oct 13 10:21:26 bad-lattitude lightdm: pam_mount(misc.c:380): 29 20 0:20 / /home/alastairb/movies rw,nosuid,nodev,relatime - cifs //bain-brain/movies rw,sec=ntlm,unc=\\bain-brain\movies,username=alastairb,uid=1000,forceuid,gid=1000,forcegid,addr=10.1.1.12,file_mode=01274,dir_mode=0700,nounix,serverino,rsize=61440,wsize=65536,actimeo=1

    The folder /home/alastairb/movies is present but empty (I can't see the files which are on the NAS in the respective share folder). In Nautilus, the share is shown in the sidebar under "Computer", and clicking on this takes me to the correct folder, but again, it's empty. Any ideas as to what I'm doing wrong?
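
    To separate pam_mount problems from plain CIFS problems, a hedged sketch of mounting the same share by hand with roughly the options from the volume line (host, share and user are copied from the question; the mountpoint is a throwaway directory):

        sudo mkdir -p /mnt/movies-test
        sudo mount -t cifs //bain-brain/movies /mnt/movies-test \
            -o username=alastairb,uid=1000,gid=1000,dir_mode=0700,file_mode=0700,nosuid,nodev
        ls /mnt/movies-test
        sudo umount /mnt/movies-test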

    Read the article

  • Can not understand this script

    - by Jim
    Can someone help me understand this script? It is from sysconf_add and I am new to scripting. I need to do something similar.

        function add_word() {
            local word=$1
            local word_quoted=$2
            if ! word_present; then
                $debug && cp $file $tmpf
                sed -i -e "${lineno} { s/^[[:space:]]*\($var=\".*\)\(\".*\)/\1 $word_quoted\2/; s/=\" /=\"/ }" $file
                $debug && diff -u $tmpf $file
            else
                echo \"$word\" already present
            fi
            # some balancing for vim"s syntax highlighting
        }
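
    To see what the sed expression actually does, a small hedged example on a made-up sysconfig-style line (the variable, word and line number are placeholders):

        # suppose line 1 of the file is:  DAEMONS="network sshd"
        # with var=DAEMONS, word_quoted=crond and lineno=1, the sed call amounts to:
        echo 'DAEMONS="network sshd"' | \
            sed -e '1 { s/^[[:space:]]*\(DAEMONS=".*\)\(".*\)/\1 crond\2/; s/=" /="/ }'
        # prints: DAEMONS="network sshd crond"
        # the second substitution only matters when the list was empty,
        # turning DAEMONS=" crond" into DAEMONS="crond"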

    Read the article

  • Chrome refused to execute this JavaScript file

    - by TestSubject528491
    In the head of my HTML page, I have: <script src="https://raw.github.com/cloudhead/less.js/master/dist/less-1.3.3.js"></script> When I load the page in my browser (Google Chrome v 27.0.1453.116) and enable the developer tools, it says: Refused to execute script from 'https://raw.github.com/cloudhead/less.js/master/dist/less-1.3.3.js' because its MIME type ('text/plain') is not executable, and strict MIME type checking is enabled. Indeed, the script won't run. Why does Chrome think this is a plain text file? It clearly has a .js file extension. Since I'm using HTML5, I omitted the type attribute, so I thought that might be causing the problem. So I added type="text/javascript" to the <script> tag, and got the same result. I even tried type="application/javascript" and still got the same error. Then I tried changing it to type="text/plain" just out of curiosity. The browser did not return an error, but of course the JavaScript did not run either. Finally I thought the periods in the filename might be throwing the browser off. So in my HTML code, I changed all the periods to the URL escape character %2E: <script src="https://raw.github.com/cloudhead/less%2Ejs/master/dist/less-1%2E3%2E3.js"></script> This still did not work. The only thing that truly works (i.e. the browser does not give an error and the JS successfully runs) is if I download the file, upload it to a local directory, and then change the src value to the local file. I'd rather not do this since I'm trying to save space on my own website. How do I get Chrome to recognize that the linked file is actually a JavaScript type?
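
    One way to confirm that it is the server, not the page, setting the type is to look at the response headers directly; a hedged diagnostic sketch (it only shows the Content-Type GitHub sends, it does not fix anything):

        curl -sI https://raw.github.com/cloudhead/less.js/master/dist/less-1.3.3.js | grep -i '^content-type'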

    Read the article

  • rsnapshot intervals in configuration file...

    - by Patrick
    A simple question about rsnapshot. In order to perform daily backups I'm going to add lines to cron on my Ubuntu machine. Then why do I also have these lines in rsnapshot.conf?

        #########################################
        # BACKUP INTERVALS                      #
        # Must be unique and in ascending order #
        # i.e. hourly, daily, weekly, etc.      #
        #########################################
        interval hourly 6
        interval daily 7
        interval weekly 4
        #interval monthly 3

    If I use cron, should I disable them? Thanks.

    P.S. I've just realized that in the crontab I still have "hourly" and "daily". Should I then uncomment only the ones I use in the crontab? And what's the point of specifying hourly if it is already specified in cron? I'm a bit confused.

        # crontab -e
        0 */4 * * * /usr/local/bin/rsnapshot hourly
        30 23 * * * /usr/local/bin/rsnapshot daily
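
    A hedged sketch of how the two halves usually fit together: cron decides when rsnapshot runs, while the interval lines only tell rsnapshot how many snapshots of each kind to keep and rotate, so every interval that is run from cron should stay defined (and uncommented) in the config. The weekly cron line below is an assumption added for completeness; the rest mirrors the question:

        # rsnapshot.conf  (keep the interval definitions; they control retention)
        interval  hourly  6
        interval  daily   7
        interval  weekly  4

        # crontab  (controls when each interval actually runs)
        0 */4 * * *   /usr/local/bin/rsnapshot hourly
        30 23 * * *   /usr/local/bin/rsnapshot daily
        0 4 * * 1     /usr/local/bin/rsnapshot weekly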

    Read the article

  • Git repo: Unravelling my mess into tidy branches

    - by Martin
    I wanted to play with a project, so git cloned it and, following its instructions, created a local branch for my configuration (I guess so that users can merge updates back). At first I was just tweaking to suit my preferences, so I didn't bother with any further branching, but now I have some code that might be useful to someone else, but with my passwords, etc in the same branch. Effectively, I have one big branch from which I'd like to have: Postgres backend (default) but with some new code I've added MySQL backend (the biggest change I've made) with that same new code My settings: I can't git ignore the settings file because I occasionally have to add sections for new functionality, but I need to keep my personal settings out of the public branches! I guess this would work best as a local-only branch. Dev branches, which I would branch from the MySQL. Starting from scratch, I think I could figure out how to branch/merge the various updates, but is there an easy way to walk through the existing repo and choose which commits to apply to which branch? Or possibly create a branch from a point upstream then merge back, excluding certain commits?
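
    One hedged way to untangle this is to list the commits on the messy branch and cherry-pick them onto fresh branches started from the original upstream point (branch and commit names are placeholders; origin/master stands in for wherever the config branch was originally created from):

        git log --oneline origin/master..my-config    # list the commits to sort through
        git checkout -b postgres-plus-code origin/master
        git cherry-pick <new-code-commit>
        git checkout -b mysql origin/master
        git cherry-pick <new-code-commit> <mysql-commit>
        git checkout -b local-settings mysql          # settings stay on a branch that is never pushed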

    Read the article

  • Own hosting, own domain on Virtualbox Ubuntu server - dynamic IP

    - by Pawel
    Hi, I have bought my TLD domain, say domain.com. What I'd like to do is to host the website available under this address on my own computer. I'm assigned a dynamic IP, and I'm also behind a router in my local network. I'd run sites on my ubuntu server on a virtualbox machine which is available in my local network. Preferably, I'd like to have my own domain on some server, with which I could experiment as much as I need (so it's only for educational purposes), but I can't afford to buy such a service. Is it feasible? Can you provide steps I'd need to take to configure it (could be just general explanation). I'd need some guidance, please.

    Read the article

  • windows 2008 server move users to new server

    - by moos3
    I have a new server that is replacing a current Windows Server 2008 R2 box. I want to move all the local users and IIS sites to the new box. Is there a way to export the two and import them on the new box? I have synced all the files for all the sites to the new box. This box doesn't belong to a domain, so it's not a matter of joining the domain. The users I'm talking about are the local computer users.

    Read the article

  • Explaining Git to someone new to revision control

    - by MaxMackie
    I've recently decided to jump into the whole world of revision control to work on some open source projects I have. I looked around (Subversion, Mercurial, Git, etc.) and found that Git seemed to make the most sense conceptually to me. I've set everything up on my computer (openSUSE) and made an account on Gitorious (let me know if there is a simpler/better hosting provider). I understand Git from a conceptual point of view (work locally, commit to a local repo, others can now check out from you, right?). But where does Gitorious come into play? Do I commit to them as well as committing locally? Apart from the concepts, I don't quite understand HOW it works when it comes to making a local repository, running git init inside a folder, and that HEAD file. Keep in mind I have never used any form of revision control before. So even the most basic concepts are foreign to me. As I post this, I'm also reading up and trying to figure it out myself.
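
    For the mechanics being asked about, a minimal hedged sketch of the usual round trip (the remote URL is a placeholder; the hosted repository on Gitorious is just the shared copy that others clone and pull from):

        git init                          # makes the folder a repository; .git/ holds the history, HEAD points at the current branch
        git add .
        git commit -m "first commit"      # commits only touch the local repository
        git remote add origin git@gitorious.org:myproject/myproject.git
        git push origin master            # publishes the local commits to the hosted copy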

    Read the article

  • How to remove SelectionLinks extension from Chrome on Windows?

    - by Faustas
    It's not possible to remove it using the standard extension disable/removal features in Chrome: the checkbox is disabled. I also found that the extension gets installed under C:\Users\\AppData\Local\Google\Chrome\User Data\Default\Extensions\ and tried to delete it there, but it's still active. In fact, I tried deleting the whole C:\Users\\AppData\Local\Google\Chrome\User Data\ directory, but the next time Chrome starts, it recreates it and the extension gets recreated. It seems like something running in Windows keeps detecting that the Chrome extension is missing and reinstates it. Any ideas how to get rid of it?

    Read the article

  • Moving StarterSTS to the (Azure) Cloud

    - by Your DisplayName here!
    Quite a few people asked me about an Azure version of StarterSTS. While I kind of knew what I had to do to make the move, I couldn't find the time. Until recently. This blog post briefly documents the necessary changes and design decisions for the next version of StarterSTS, which will work both on-premise and on Azure.

    Provider
    Fortunately StarterSTS is already based on the idea of "providers". Authentication, roles and claims generation is based on the standard ASP.NET provider infrastructure. This makes the migration to different data stores less painful. In my case I simply moved the ASP.NET provider database to SQL Azure and still use the standard SQL Server based membership, roles and profile provider. In addition StarterSTS has its own providers to abstract resource access for certificates, relying party registration, client certificate registration and delegation. So I only had to provide new implementations. Signing and SSL keys now go in the Azure certificate store, and user mappings (client certificates and delegation settings) have been moved to Azure table storage. The one thing I didn't anticipate when I originally wrote StarterSTS was the need to also encapsulate configuration. Currently configuration is "locked" to the standard .NET configuration system. The new version will have a pluggable SettingsProvider with versions for .NET configuration as well as Azure service configuration. If you want to externalize these settings into e.g. a database, it is now just a matter of supplying a corresponding provider. Moving between the on-premise and Azure version will be just a matter of using different providers.

    URL Handling
    Another thing that's substantially different on Azure (and load balanced scenarios in general) is the handling of URLs. In farm scenarios, the standard APIs like ASP.NET's Request.Url return the current (internal) machine name, but you typically need the address of the external facing load balancer. There's a hotfix for WCF 3.5 (included in v4) that fixes this for WCF metadata. This was accomplished by using the HTTP Host header to generate URLs instead of the local machine name. I now use the same approach for generating WS-Federation metadata as well as information card files.

    New Features
    I introduced a cache provider. Since we now have slightly more expensive lookups (e.g. relying party data from table storage), it makes sense to cache certain data in the front end. The default implementation uses the ASP.NET web cache and can be easily extended to use products like memcached or AppFabric Caching. Starting with the relying party provider, I now also provide a read/write interface. This allows building management interfaces on top of this provider. I also include a (very) simple web page that allows working with the relying party provider data. I guess I will use the same approach for other providers in the future as well. I am also doing some work on the tracing and health monitoring area, which is especially important for the Azure version. Stay tuned.

    Read the article

  • Best Practices in Preventing Locks in File sharing in Windows?

    - by crosenblum
    We have a web development server that hosts our source control, which we use via mapped drives to open/edit/save/create files on it. But I have noticed a lot of weird glitches with file locking/sharing. For example, if I have a file open on my local PC and try to check out that file via my source control on dev, it errors out. The source control I use is QCVS by Qumasoft. Or if I use CuteFTP to download something to dev via my local PC, it won't download unless I close that file. So my brain sniffs around and figures this is either an issue with folder/network/sharing, or an issue of permissions. Has anyone else had similar issues, and what did they do to fix it permanently? Thank you...

    Read the article

  • Why is my web service not live on the internet? [closed]

    - by blankon91
    I have a Windows server that is live on the internet (e.g. www.mysite.com). I then wanted to create another site on a different port (e.g. www.mysite.com:502). I created it, and it works when I access it on the local network, but when I access it from outside the local network (the internet), www.mysite.com:502 can't be reached, although www.mysite.com can. What should I do to make www.mysite.com:502 go online? I use Windows Server 2008 Standard.

    Read the article

  • SSH connection dropping after login

    - by kappa
    I've set up a connection with autossh that creates some tunnels at system startup, but when I try to connect, after a successful login (with an RSA key) the connection drops. Here is a trace:

        debug1: Authentication succeeded (publickey).
        debug1: Remote connections from LOCALHOST:5006 forwarded to local address localhost:22
        debug1: Remote connections from LOCALHOST:6006 forwarded to local address localhost:80
        debug1: channel 0: new [client-session]
        debug1: Requesting [email protected]
        debug1: Entering interactive session.
        debug1: remote forward success for: listen 5006, connect localhost:22
        debug1: remote forward success for: listen 6006, connect localhost:80
        debug1: All remote forwarding requests processed
        debug1: Sending environment.
        debug1: Sending env LANG = it_IT.UTF-8
        debug1: Sending env LC_CTYPE = en_US.UTF-8
        debug1: client_input_channel_req: channel 0 rtype exit-status reply 0
        debug1: client_input_channel_req: channel 0 rtype [email protected] reply 0
        debug1: channel 0: free: client-session, nchannels 1
        Transferred: sent 2400, received 2312 bytes, in 1.3 seconds
        Bytes per second: sent 1904.2, received 1834.4
        debug1: Exit status 1

    What can be the problem? All this stuff is managed by a script already running on another machine (creating reverse tunnels to the same machine but with different ports).
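
    For comparison, a hedged sketch of a tunnel-only invocation matching the two forwards in the trace; -N asks the server for no remote command, so the session is not torn down for lack of a shell (user and host are placeholders):

        autossh -M 0 -f -N \
            -R 5006:localhost:22 \
            -R 6006:localhost:80 \
            tunneluser@remote.example.com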

    Read the article

  • Name of my sql server instance from outside the network

    - by Michel
    Hi, normally I connect to my SQL Server instance from my local computer, and then the server name to connect to is the name of my laptop. So I can connect to the server instance 'MichelLaptop'. But now I'm trying to connect to my server from outside my network, and the first thing I wonder is: what is the name of the instance? I've made a redirect to my local machine in the DNS of my domain, so I said (this is not the real data) testsql.mydomain.com goes to 190.191.192.193, and when I ping testsql.mydomain.com, I get a response from 190.191.192.193. But what then is the server name?

    Read the article

  • Missing menu items for Azure SQL tables within SQL Server Management Studio?

    - by Sid
    I have a table (say Table1) that is replicated via SQL Data Sync Agent across a local SQL Server 2012 as well as an Azure SQL Server (part of Microsoft Azure). Everything about Table1 (schema, table values etc ) is identical to the best of my understanding. However, when I list and right click Table1 from Microsoft SQL Server Management Studio 2012 (SSMS), I get some very different menu options, even for seemingly basic stuff. Lets focus only on the 'Design' menu item: It is visible for Table1 on the local SQL server in SSMS It is missing for Table1 on Azure SQL via SSMS It is visible for Table1 (as Open Table Definition) on Azure SQL when reaching it via Visual Studio 2012 (Server Explorer - Data connections) This is seen in the screenshots below: Now I use scripts from some real stuff (esp when I need to check in the SQL scripts etc) but this difference concerns me to some extent. Am I witnessing just a tools artifact in SQL Server Management Studio when connecting to Azure SQL? or is it something more serious about limitations of Azure SQL itself (although, just seeing the Design surface is so basic!)?

    Read the article

  • All terminal commands (like ls, cd, edit, open) are returning errors on my Mac

    - by park
    From what I can tell from reading other questions/answers, my .bash_profile file may be corrupt. If I type echo $PATH in Terminal the result is:

        /usr/local/git/bin

    From what I've read, that's not what the result is supposed to be. But I also can't get any of the commands (like edit or subl, for Sublime Text 2) to open the .bash_profile file to edit it. I was able to open the file in TextEdit using "cmd-shift-.", and here's what's in the file:

        [[ -s "$HOME/.rvm/scripts/rvm" ]] && source "$HOME/.rvm/scripts/rvm"
        PATH=$PATH:~/bin
        export PATH
        export PATH=/usr/local/git/bin

    But the file is LOCKED, so I can't edit it there either. I'm very new to programming and in the middle of trying to install everything on my Mac to go through a Ruby on Rails tutorial. I can't even check my version of Ruby, since even ruby -v returns:

        -bash: ruby: command not found

    Any help would be greatly appreciated. Thanks.
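
    A hedged sketch of one way to recover: restore a sane PATH for the current Terminal session, then unlock and edit the file (the PATH below is the stock macOS search path plus the existing git entry; the chflags step only matters if the file really is locked):

        export PATH=/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/local/git/bin
        ls                                 # built-in commands should work again in this session
        chflags nouchg ~/.bash_profile     # clear the locked flag if it is set
        open -e ~/.bash_profile            # edit in TextEdit and fix the export PATH lines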

    Read the article
