Search Results

Search found 8896 results on 356 pages for 'jason block'.

Page 111 of 356

  • Dante (SOCKS server) not working

    - by gregmac
    I'm trying to set up a SOCKS proxy using Dante for testing purposes. However, I can't even get it to work with a web browser, despite following several tutorials. I've tried both IE and Firefox; in both cases I chose "Manual proxy configuration", left everything blank except the SOCKS host, and entered the IP of my proxy and the port number (1080). I just get "Server not found" / "Problems loading this page" and see nothing in danted, even when running it in debug mode. If I do a "telnet 10.0.0.40 1080" I do see the connection open in the danted debug output, so I know that much is working. Here's my config:

        logoutput: stdout /var/log/danted/danted.log
        internal: eth0 port = 1080
        external: eth0
        method: username none #rfc931
        user.privileged: proxy
        user.notprivileged: nobody
        user.libwrap: nobody
        connecttimeout: 30   # on a lan, this should be enough if method is "none".
        client pass { from: 10.0.0.0/8 port 1-65535 to: 0.0.0.0/0 }
        client pass { from: 127.0.0.0/8 port 1-65535 to: 0.0.0.0/0 }
        client block { from: 0.0.0.0/0 to: 0.0.0.0/0 log: connect error }
        block { from: 0.0.0.0/0 to: 127.0.0.0/8 log: connect error }
        pass { from: 10.0.0.0/8 to: 0.0.0.0/0 protocol: tcp udp }
        pass { from: 127.0.0.0/8 to: 0.0.0.0/0 protocol: tcp udp }
        block { from: 0.0.0.0/0 to: 0.0.0.0/0 log: connect error }

    I'm sure I'm probably missing something simple, but I'm lost. I haven't even thought about SOCKS since the late '90s.
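
    Since telnet only proves that the port accepts connections, a quick next step is to speak a few bytes of actual SOCKS at danted. A minimal sketch (Python; it assumes the proxy at 10.0.0.40:1080 as above - the three bytes are the RFC 1928 client greeting offering the "no authentication" method):

        import socket

        # SOCKS5 client greeting (RFC 1928): version 5, one method offered, method 0x00 (no auth).
        s = socket.create_connection(("10.0.0.40", 1080), timeout=5)
        s.sendall(b"\x05\x01\x00")
        reply = s.recv(2)
        s.close()

        # A healthy danted configured with "method: none" should answer 05 00
        # (version 5, no-auth method selected).
        print("server replied:", reply.hex())
        print("SOCKS5 handshake OK" if reply == b"\x05\x00" else "unexpected reply")

    If that handshake succeeds, the daemon side is likely fine and the browser-side proxy settings become the more plausible culprit.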

  • OpenVPN IPV6 Tunnel Radvd

    - by Arenstar
    Hello. I have an interesting question regarding IPv6 + OpenVPN. My version is OpenVPN 2.1.1. I have been given a native /64 IPv6 network (for this example, 2001:acb:132:acb::/64). The plan is to route this block through OpenVPN and into an office (for testing purposes). To explain: I have a CentOS box as the first Linux "router" in a datacenter, and an Ubuntu box as the second Linux "router" in the office. I have created a simple point-to-point tunnel using tun (based off an IPv4 address to start the tunnel).

    I have assigned to CentOS:

        /sbin/ip addr add fed1::1/128 dev eth0
        /sbin/ip addr add fed2::2/128 dev tun0
        /sbin/ip route add 2001:acb:132:acb::/64 dev tun0   ## ipv6 Block down the tunnel
        /sbin/ip route add ::/0 dev eth0                    ## Default out to Gateway

    I have assigned to Ubuntu:

        /sbin/ip addr add fed1::3/128 dev tun0
        /sbin/ip addr add fed1::4/128 dev eth0
        /sbin/ip route add 2001:acb:132:acb::/64 dev eth0   ## ipv6 Block down to eth0
        /sbin/ip route add ::/0 dev tun0                    ## Default up the tunnel

    I have also run on both servers:

        sysctl -w net.inet6.ip6.forwarding=1

    Looks good... right? Wrong. :( I am not able to ping fed1::1 from fed1::4 (Ubuntu), though I can ping :4, :3, and :2. However, I can ping fed1::1 and fed1::2 from :3 (very strange). I am able to access the internet from any IPv6 interface on the CentOS box, but clearly not from the Ubuntu box. Eventually I will run radvd on the Ubuntu box's eth0 and autoconfigure the network with IPv6 addresses. Anyone with some advice / tips to help me out? Cheers
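
    One thing worth double-checking here: net.inet6.ip6.forwarding is the BSD sysctl name, and that command should fail with an error on a Linux box; the Linux key is net.ipv6.conf.all.forwarding. A minimal check for the Linux side (a sketch; run it on both routers):

        # Check whether IPv6 forwarding is actually enabled on a Linux router.
        # net.inet6.ip6.forwarding is the BSD name; on Linux the equivalent
        # setting is net.ipv6.conf.all.forwarding, exposed in /proc here:
        with open("/proc/sys/net/ipv6/conf/all/forwarding") as f:
            enabled = f.read().strip() == "1"
        print("IPv6 forwarding:", "enabled" if enabled else "disabled")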

  • Windows Firewall Software to Filter Transit Traffic

    - by soonts
    I need to test my networking code for the Nintendo Wii under conditions where a specific Internet server is not available. The Wii is connected to my PC with a crossover ethernet cable. The PC has 2 NICs and is connected to a hardware router with an ethernet cable; the router serves as NAT and has the internet connection on its uplink. I put the Wii on the same LAN as the PC using the Windows XP network bridge, and I can observe the Wii's network traffic using e.g. the Wireshark sniffer. Is there a software firewall that can selectively filter out transit traffic (e.g. block outgoing TCP connections to 123.45.67.89 on port 443)? I tried Outpost Pro 2009 and Comodo. The Outpost firewall blocks all transit traffic with its implicit "block transit packet" rule; if transit traffic is explicitly allowed by creating a system-wide low-level rule, it is allowed completely and no other filter can selectively block it. The Comodo firewall only processes rules when the packet has localhost's IP as either source or destination, allowing the rest of the traffic. Any ideas? Thanks in advance! P.S. The platform is Windows XP 32-bit; no other OSes are allowed. Windows ICS (Internet Connection Sharing) doesn't work since the Wii is unable to connect, and besides, I don't like the idea of adding one more level of NAT.

  • Bypassing SQUID on freebsd with PF

    - by epema
    I have PF + squid31 on FreeBSD 9.0, and I want some hosts (aka "goodguys") to bypass the proxy, so that torrents are not logged. Also, I am not sure about transparent mode: it means that I don't have to configure proxy settings on the client side, right? I have tried a "no rdr" rule:

        no rdr on $int_if inet proto {tcp,udp} from 192.168.1.233/32 to any

    However, no luck. :( Here is a quick look at my conf files.

    /usr/local/etc/squid/squid.conf:

        http_port 192.168.1.1:8080 transparent

    /etc/rc.conf:

        gateway_enable="YES"
        pf_enable="YES"
        pf_rules="/usr/local/etc/pf.conf"
        pflog_enable="YES"
        squid_enable="YES"

    I have squid31 installed from ports with SQUID_PF ("Enable transparent proxying with PF") on.

    /usr/local/etc/pf.conf:

        int_if="re0"
        ext_if="bge0"
        localnet="{ 192.168.1.0/24 }"
        table <goodguys> const { "192.168.1.219", "192.168.1.233" }
        set block-policy drop
        set skip on lo0
        scrub in all fragment reassemble
        scrub out all random-id max-mss 1440
        block in on $ext_if
        pass out on $ext_if keep state
        block in on $int_if
        pass in on $int_if inet proto tcp from $int_if:network to $int_if port 8080 keep state
        pass in on $int_if inet proto udp from $int_if:network to $int_if port 21 keep state
        pass in on $int_if inet proto udp from $int_if:network to $int_if port 22 keep state
        pass in on $int_if inet proto udp from $int_if:network to $int_if port 53 keep state
        pass in on $int_if inet proto tcp from $int_if:network to any port { smtp, pop3 } keep state
        pass in on $int_if inet proto icmp from $int_if:network to $int_if keep state
        pass out on $int_if keep state

    What lines should I add to the conf files? I am assuming that the problem is in the firewall (PF).

  • Blocking HTTPS and P2P Traffic

    - by Genboy
    I have a Debian server running at the gateway level on a LAN. It runs squid for creating block lists of websites - e.g. blocking social networking on the LAN - and also uses iptables. I am able to do a lot of things with squid & iptables, but a few things seem difficult to achieve:

    1) If I block facebook through their http url, people can still access https://www.facebook.com, because squid doesn't handle https traffic by default. However, if the users set the gateway IP address as the proxy in their web browser, then https is also blocked. So I can do one thing: use iptables to drop all outgoing 443 traffic, so that people are forced to set the proxy in their browser in order to browse any HTTPS site. Is there a better solution for this?

    2) As the number of blocked urls increases in squid, I am planning to integrate squidGuard. However, the good squidGuard lists are not free for commercial use. Does anyone know of a good squidGuard list which is free?

    3) Blocking Yahoo Messenger, GTalk, etc. These instant messenger programs work on many ports, so you need to drop lots of outgoing ports in iptables. New ports keep getting added, so you have to keep adding them. And even if your list of ports is current, people can still use the web version of GTalk etc.

    4) Blocking P2P. I haven't been able to figure out how to do this yet.

  • Apache2 VirtualHosts 403 Oddity

    - by Carson C.
    I'm sure this is something I should already understand, but I'm finding myself confused. The configs in play add up to this:

        NameVirtualHost *:80
        Listen 80
        <VirtualHost *:80>
            <Directory />
                Options FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
            </Directory>
        </VirtualHost>
        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName domain.tld
            ServerAlias *.domain.tld
            DocumentRoot /var/www/domain.tld
            <Directory /var/www/domain.tld>
                Options -Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog ${APACHE_LOG_DIR}/error.log
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>

    DNS is working correctly. The issue is that every variant of http://*.domain.tld/ (including http://domain.tld/) works correctly, except http://www.domain.tld/, which throws a 403. The logs state:

        client denied by server configuration: /etc/apache2/htdocs

    If I remove the first VirtualHost block from play, everything works as expected, including http://www.domain.tld. This leads me to believe that for some reason Apache is not considering www.domain.tld to match the second VirtualHost block, and is thereby falling back to deny all. This seems wrong. Shouldn't the second block match www.domain.tld? I've been able to resolve this, but I still don't understand why. In my original configs, I was using the real IP address of the server instead of *. Switching all instances to * as shown above made everything work as expected. Does this have something to do with the way browsers request resources?

  • "could not find suitable fingerprints matched to available hardware" error

    - by Alex
    I have a ThinkPad T61 with a UPEK fingerprint reader. I'm running Ubuntu 9.10 with fprint installed. Everything works fine (I am able to swipe my fingerprint to authenticate any permission dialogues or "sudo" prompts successfully) except for actually logging onto my laptop when I boot up or end my session. I receive an error below the GNOME login that says "Could not locate any suitable fingerprints matched to available hardware." What is causing this? Here are the contents of the /etc/pam.d/common-auth file:

        #
        # /etc/pam.d/common-auth - authentication settings common to all services
        #
        # This file is included from other service-specific PAM config files,
        # and should contain a list of the authentication modules that define
        # the central authentication scheme for use on the system
        # (e.g., /etc/shadow, LDAP, Kerberos, etc.). The default is to use the
        # traditional Unix authentication mechanisms.
        #
        # As of pam 1.0.1-6, this file is managed by pam-auth-update by default.
        # To take advantage of this, it is recommended that you configure any
        # local modules either before or after the default block, and use
        # pam-auth-update to manage selection of other modules. See
        # pam-auth-update(8) for details.

        # here are the per-package modules (the "Primary" block)
        auth  sufficient                  pam_fprint.so
        auth  [success=1 default=ignore]  pam_unix.so nullok_secure
        # here's the fallback if no module succeeds
        auth  requisite                   pam_deny.so
        # prime the stack with a positive return value if there isn't one already;
        # this avoids us returning an error just because nothing sets a success code
        # since the modules above will each just jump around
        auth  required                    pam_permit.so
        # and here are more per-package modules (the "Additional" block)
        auth  optional                    pam_ecryptfs.so unwrap
        # end of pam-auth-update config
        #auth  sufficient  pam_fprint.so
        #auth  required    pam_unix.so nullok_secure

  • Where do vendors publish internal transfer rates of HDDs?

    - by red888
    So I've started to dig into storage fundamentals, and found that in order to calculate the IOPS of an HDD you need to know the internal transfer rate of the drive (the time it takes data to move from the platters to the disk's internal cache). I went on Newegg and even a few vendor sites and could not find this figure published for any HDDs. Is it sometimes called something else? Take this link to a Seagate HDD, for instance: nowhere do I see "internal transfer rate", but I do see something called "Sustained Data Rate OD" - is that the same thing? Just so you know where I'm getting this info (book: "Information Storage and Management: Storing, Managing..."), consider an example with the following specifications provided for a disk:

    - The average seek time is 5 ms in a random I/O environment; therefore, T = 5 ms.
    - Disk rotation speed of 15,000 revolutions per minute, or 250 revolutions per second, from which the rotational latency (L) can be determined; this is one-half of the time taken for a full rotation, or L = (0.5/250 rps, expressed in ms).
    - 40 MB/s internal data transfer rate, from which the internal transfer time (X) is derived based on the block size of the I/O - for example, an I/O with a block size of 32 KB; therefore X = 32 KB/40 MB.

    Consequently, the time taken by the I/O controller to serve an I/O of block size 32 KB is TS = 5 ms + (0.5/250) + 32 KB/40 MB = 7.8 ms. Therefore, the maximum number of I/Os serviced per second, or IOPS, is (1/TS) = 1/(7.8 × 10^-3) = 128 IOPS.
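
    The arithmetic in that example is easy to sanity-check. A quick sketch of the same calculation (using decimal KB/MB, which is how the book's 7.8 ms figure falls out):

        # Reproduce the book's worked example: disk service time and resulting IOPS.
        seek_ms = 5.0                          # T: average seek time
        rotational_ms = 0.5 / 250 * 1000       # L: half a rotation at 250 rps = 2.0 ms
        transfer_ms = 32e3 / 40e6 * 1000       # X: 32 KB over 40 MB/s = 0.8 ms

        ts_ms = seek_ms + rotational_ms + transfer_ms
        print("TS   = %.1f ms" % ts_ms)        # 7.8 ms
        print("IOPS = %.0f" % (1000 / ts_ms))  # 128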

  • Routing public IPs (each a /32) through a VPN to another server

    - by Lee S
    Hopefully the title makes sense. I have a server currently in a colo facility, with many IP addresses routed to it. They are individual IPs, not a contiguous block. Due to vastly improved connectivity (fibre) at home, I am slowly bringing my infrastructure in-house for manageability and, eventually, cost savings. What I would like to do is use the IP addresses allocated to my existing server at home. I have an IP block allocated to me on my new ISP connection, but for a couple of reasons I'd like to make use of the colo ones for now:

    - Ease of transition: lots of domains, DNS, hard-coded IPs in programs, etc.
    - Connectivity fallback: if my primary line goes down and switches to fallback 1 (DSL) or fallback 2 (4G), I lose access to the ISP-allocated block of IPs, which is only presented on the primary WAN interface.

    What I'd like to achieve is that my home virtualisation server (Proxmox/Debian-based) "dials in" to the colo server (also Proxmox/Debian) via VPN or similar, and gets to make use of the IP addresses that currently terminate on the colo box. If the primary connection to my ISP goes down and one of the fallback routes kicks in, the VPN tunnel will just time out and then be re-established on the backup connection instead. I'm sure this is doable, but I have no idea how. I'm not afraid to get my hands dirty; I just don't really know where to start.

  • IIS Request Filtering Rule for User Agent

    - by alexp
    I'm trying to block requests from a certain bot. I've added a request filtering rule, but I know it is still hitting the site because it shows up in Google Analytics. Here is the filtering rule I added:

        <security>
          <requestFiltering>
            <filteringRules>
              <filteringRule name="Block GomezAgent" scanUrl="false" scanQueryString="false">
                <scanHeaders>
                  <add requestHeader="User-Agent" />
                </scanHeaders>
                <denyStrings>
                  <add string="GomezAgent+3.0" />
                </denyStrings>
              </filteringRule>
            </filteringRules>
          </requestFiltering>
        </security>

    This is an example of the user agent I'm trying to block:

        Mozilla/5.0+(Windows+NT+6.1;+WOW64;+rv:13.0;+GomezAgent+3.0)+Gecko/20100101+Firefox/13.0.1

    In some ways it seems to work. If I use Chrome to spoof my user agent, I get a 404, as expected. But the bot traffic is still showing up in my analytics. What am I missing?
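
    One educated guess worth ruling out (an assumption, not a confirmed diagnosis): the plus signs in that string are how IIS's W3C logs encode spaces, while the User-Agent header on the wire contains literal spaces, and request filtering matches against the raw header. A tiny sketch of the mismatch:

        # The logged form of the UA replaces spaces with '+'; the raw header does not.
        raw_ua = ("Mozilla/5.0 (Windows NT 6.1; WOW64; rv:13.0; GomezAgent 3.0) "
                  "Gecko/20100101 Firefox/13.0.1")
        logged_ua = raw_ua.replace(" ", "+")

        print("GomezAgent+3.0" in raw_ua)     # False - the deny string never matches the wire form
        print("GomezAgent+3.0" in logged_ua)  # True  - it only matches the log-encoded form
        print("GomezAgent" in raw_ua)         # True  - a space-free deny string matches both

    That would also explain why a spoofed browser test returns a 404 (if the spoofed value includes the literal plus signs) while the real bot sails through.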

  • When using software RAID and LVM on Linux, which IO scheduler and readahead settings are honored?

    - by andrew311
    In the case of multiple layers (physical drives -> md -> dm -> LVM), how do the schedulers, readahead settings, and other disk settings interact?

    Imagine you have several disks (/dev/sda - /dev/sdd), all part of a software RAID device (/dev/md0) created with mdadm. Each device (including the physical disks and /dev/md0) has its own setting for the IO scheduler (changed like so) and readahead (changed using blockdev). When you throw in things like dm (crypto) and LVM, you add even more layers with their own settings.

    For example, if the physical device has a readahead of 128 blocks and the RAID has a readahead of 64 blocks, which is honored when I do a read from /dev/md0? Does the md driver attempt a 64-block read, which the physical device driver then translates to a read of 128 blocks? Or does the RAID readahead "pass through" to the underlying device, resulting in a 64-block read?

    The same kind of question holds for schedulers: do I have to worry about multiple layers of IO schedulers and how they interact, or does /dev/md0 effectively override the underlying schedulers?

    In my attempts to answer this question, I've dug up some interesting data on schedulers and tools which might help figure this out (a sketch for dumping the current per-layer settings follows this list):

    - Linux Disk Scheduler Benchmarking from Google
    - blktrace - generates traces of the I/O traffic on block devices
    - a relevant Linux kernel mailing list thread
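
    As a starting point for the empirical route, the current scheduler and readahead for every layer can be read straight out of sysfs. A sketch (Linux; stacked md/dm devices appear alongside the physical disks, so the layers can be compared side by side):

        import glob
        import os

        # Print the IO scheduler and readahead for every block device the kernel
        # exposes. The active scheduler is shown in [brackets]; md/dm devices
        # typically report "none" since they sit above the elevator. read_ahead_kb
        # is the same knob blockdev --setra touches (blockdev counts 512-byte
        # sectors, sysfs counts KB).
        for queue in sorted(glob.glob("/sys/block/*/queue")):
            dev = queue.split("/")[3]
            settings = {}
            for attr in ("scheduler", "read_ahead_kb"):
                try:
                    with open(os.path.join(queue, attr)) as f:
                        settings[attr] = f.read().strip()
                except OSError:
                    settings[attr] = "n/a"
            print("%-8s scheduler=%s read_ahead_kb=%s"
                  % (dev, settings["scheduler"], settings["read_ahead_kb"]))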

  • "Unverifiable code failed policy check" for a closed source assembly

    - by Jason
    I'm attempting to dynamically load some (purchased) assemblies from resource streams in a C# program during an MSI installation routine, but I'm getting "Unverifiable code failed policy check". I read some tips online about compiling the embedded assembly with /clr:safe, but I don't have that option. Is there a way to work around this policy check? Thanks.

  • WPF Dynamic Layout with ItemsControl and Grid

    - by Jason Williams
    I am creating a WPF form. One of the requirements is that it have a sector-based layout, so that a control can be explicitly placed in one of the sectors/cells. I have created a tic-tac-toe example below to convey my problem.

    There are two types and one base type:

        public class XMoveViewModel : MoveViewModel { }

        public class OMoveViewModel : MoveViewModel { }

        public class MoveViewModel
        {
            public int Row { get; set; }
            public int Column { get; set; }
        }

    The DataContext of the form is set to an instance of:

        public class MainViewModel : ViewModelBase
        {
            public MainViewModel()
            {
                Moves = new ObservableCollection<MoveViewModel>()
                {
                    new XMoveViewModel() { Row = 0, Column = 0 },
                    new OMoveViewModel() { Row = 1, Column = 0 },
                    new XMoveViewModel() { Row = 1, Column = 1 },
                    new OMoveViewModel() { Row = 0, Column = 2 },
                    new XMoveViewModel() { Row = 2, Column = 2 }
                };
            }

            public ObservableCollection<MoveViewModel> Moves { get; set; }
        }

    And finally, the XAML looks like this:

        <Window.Resources>
            <DataTemplate DataType="{x:Type vm:XMoveViewModel}">
                <Image Source="XMove.png"
                       Grid.Row="{Binding Path=Row}"
                       Grid.Column="{Binding Path=Column}"
                       Stretch="None" />
            </DataTemplate>
            <DataTemplate DataType="{x:Type vm:OMoveViewModel}">
                <Image Source="OMove.png"
                       Grid.Row="{Binding Path=Row}"
                       Grid.Column="{Binding Path=Column}"
                       Stretch="None" />
            </DataTemplate>
        </Window.Resources>
        <Grid>
            <ItemsControl ItemsSource="{Binding Path=Moves}">
                <ItemsControl.ItemsPanel>
                    <ItemsPanelTemplate>
                        <Grid ShowGridLines="True">
                            <Grid.RowDefinitions>
                                <RowDefinition />
                                <RowDefinition />
                                <RowDefinition />
                            </Grid.RowDefinitions>
                            <Grid.ColumnDefinitions>
                                <ColumnDefinition />
                                <ColumnDefinition />
                                <ColumnDefinition />
                            </Grid.ColumnDefinitions>
                        </Grid>
                    </ItemsPanelTemplate>
                </ItemsControl.ItemsPanel>
            </ItemsControl>
        </Grid>

    What was not so obvious to me when I started is that the ItemsControl element actually wraps each item in a container, so my Grid.Row and Grid.Column bindings are ignored, since the images are not directly contained within the grid. Thus, all of the images are placed in the default row and column (0, 0). (The original post illustrated "what is happening" versus "the desired result" with two screenshots.) So, my question is this: how can I achieve the dynamic placement of my controls in a grid? I would prefer a XAML/data-binding/MVVM-friendly solution. Thanks.

  • Blocking access to websites with objective-C / root privileges in objective-C

    - by kvaruni
    I am writing a program in Objective-C (Xcode 3.2, on Snow Leopard) that is capable of either selectively blocking certain sites for a duration, or only allowing certain sites (and thus blocking all others) for a duration. The reasoning behind this program is rather simple: I tend to get distracted when I have full internet access, but I do need internet access during my working hours to get to a number of work-related websites. Clearly, this is not a permanent block, but one that only helps me to focus whenever I find myself wandering a bit too much.

    At the moment, I am using a Unix script that is called via AppleScript to obtain Administrator permissions. It then activates a number of ipfw rules and clears them after a specific duration to restore full internet access. Simple and effective, but since I am running as a standard user, it gets cumbersome to enter my administrator password each and every time I want to go "offline". Furthermore, this is a great opportunity to learn to work with Xcode and Objective-C.

    At the moment, everything works as expected, minus the actual blocking. I can add a number of sites to a list, specify whether I want to block or allow these websites, and I can "start" the blocking by specifying a time until which I want to stay "offline". However, I find it hard to obtain clear information on how to run a privileged Unix command from Objective-C. Ideally, I would like to be able to store the Administrator account's credentials in the Keychain for later use, so that I can simply move into "offline" mode with the convenience of clicking a button. Even more ideally, there might be some class in Objective-C with which I can block access to some/all websites for this particular user without needing to rely on privileged Unix commands. A third possibility is starting this program with root permissions and then reducing the permissions until I need them, but since this is a GUI application that lives in the menu bar of OS X, the results are rather awkward, and getting it to run each and every time with root permission is no easy task. Anyone who can offer me some pointers or advice? Please, no security warnings; I am fully aware that what I want to do is a potential security threat.

  • Entity Framework 4 mapping fragment error when adding new entity scalar

    - by Jason Morse
    I have an Entity Framework 4 model-first design. I created a first draft of my model in the designer and all was well: I compiled, generated the database, etc. Later on I tried to add a string scalar (Nullable = true) to one of my existing entities, and I keep getting this type of error when I compile:

        Error 3004: Problem in mapping fragments starting at line 569:
        No mapping specified for properties MyEntity.MyValue in Set MyEntities.
        An Entity with Key (PK) will not round-trip when:
        Entity is type [MyEntities.MyEntity]

    I keep having to manually open the EDMX file and correct the XML whenever I add scalars. Any ideas on what's going on?

  • Getting Exception thrown and not caught error on jquery ui tabs in ie8

    - by Jason
    I am getting the following error (pointing to jquery-1.4.2.js):

        Message: Exception thrown and not caught
        Line: 2904
        Char: 2
        Code: 0

    with the following versions: IE8, jQuery 1.4.2, jQuery UI 1.8.1. The error occurs when I do the following:

        $("#theTabs").tabs();

    On the same page I also have two instances of the jQuery UI dialog and one instance of the jQuery UI accordion. Am I missing something? This does not happen in FF on Windows (nor in Safari or FF on OS X), and I use the same code elsewhere for tabs and it works just fine.

  • Magento - Fatal error: Class name must be a valid object or a string

    - by Jason Millward
    I'm having a problem with a Magento installation that I hope someone can help me with. I suddenly started getting the following error message when I accessed the site:

        Fatal error: Class name must be a valid object or a string in
        /app/code/core/Mage/Core/Model/Resource.php on line 215

    I've searched for anyone with a similar issue but haven't had any luck, so I'm stuck and really need to get this resolved. Can anyone help?

  • Windows LiveID "Couldn't sign you out" error at sign-out

    - by Jason
    I'm implementing LiveID authentication on my website. I've done it before, but not on this particular platform, MojoPortal. The sign-in works properly, but when I attempt to sign out, I get the error message quoted below. My browser is not blocking cookies. I get the same message when logging in to and out of, say, MSDN with a LiveID too now. I can't figure out whether there's something about my site's programming that is interfering with the sign-out process of LiveID (since I believe that all (recent?) websites get sent a sign-out command), or whether live.com is just having issues lately and this is a coincidence.

        Couldn't sign you out
        We couldn't sign you out because your browser is blocking cookies. To sign
        out, close all of your browser windows. To keep this from happening again,
        change your browser's settings to allow cookies. If you don't know how to
        do that, see your browser's help.

  • Calling C# object method from IronPython

    - by Jason
    I'm trying to embed a scripting engine in my game. Since I'm writing it in C#, I figured IronPython would be a great fit, but the examples I've been able to find all focus on calling IronPython methods from C# instead of C# methods from IronPython scripts. To complicate things, I'm using Visual Studio 2010 RC1 on Windows 7 64-bit. IronRuby works like I expect it would, but I'm not very familiar with Ruby or Python syntax. What I'm doing:

        ScriptEngine engine = Python.CreateEngine();
        ScriptScope scope = engine.CreateScope();

        // Test class with a method that prints to the screen.
        scope.SetVariable("test", this);

        ScriptSource source = engine.CreateScriptSourceFromString(
            "test.SayHello()",
            Microsoft.Scripting.SourceCodeKind.Statements);
        source.Execute(scope);

    This generates the error "'TestClass' object has no attribute 'SayHello'". This exact setup works fine with IronRuby, though, using "self.test.SayHello()". I'm wary of using IronRuby because it doesn't appear as mature as IronPython, though if it's close enough, I might go with it. Any ideas? I know this has to be something simple.
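
    When debugging this sort of thing, one low-effort check (a sketch, assuming the same "test" variable injected above) is to ask IronPython what it can actually see on the object; if SayHello is missing from the list, the usual suspects are member visibility or the script binding to a different type than expected. IronPython 2.x runs Python 2 syntax:

        # Runs inside the IronPython engine; 'test' is the host object injected
        # via scope.SetVariable("test", this) in the C# snippet above.
        print type(test)    # which .NET type the script actually received
        print dir(test)     # the members IronPython can resolve on it
        test.SayHello()     # resolves only if SayHello is a public method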

  • MSBuild Script and VS2010 publish apply Web.config Transform

    - by Jason
    So, I have VS 2010 installed and am in the process of modifying my MSBuild script for our TeamCity build integration. Everything is working great, with one exception: how can I tell MSBuild that I want to apply the Web.config transform files that I've created when I publish the build? I have the following, which produces the compiled web site, but it outputs Web.config, Web.Debug.config, and Web.Release.config (all three files) to the compiled output directory. In Studio, when I perform a publish to the file system, it does the transform and only outputs the Web.config with the appropriate changes.

        <Target Name="CompileWeb">
            <MSBuild Projects="myproj.csproj" Properties="Configuration=Release;" />
        </Target>

        <Target Name="PublishWeb" DependsOnTargets="CompileWeb">
            <MSBuild Projects="myproj.csproj"
                     Targets="ResolveReferences;_CopyWebApplication"
                     Properties="WebProjectOutputDir=$(OutputFolder)$(WebOutputFolder);
                                 OutDir=$(TempOutputFolder)$(WebOutputFolder)\;Configuration=Release;" />
        </Target>

    Any help would be great! I know this can be done by other means, but I would like to do this using the new VS 2010 way if possible.

  • Windows server 2003 default administrator password

    - by Jason Baker
    Sorry if this is an overly simplistic question, but I'm a bit stuck here. :) I need a Windows machine to do some programming for class. Since I have my MacBook with me everywhere I go, I figured it would be easiest to install a VM, and since I can get a copy of Windows Server 2003 for free via DreamSpark, I thought I'd try that. Here's what happened, though: I installed Windows Server (disc one). When the system booted up, VMware automatically installed VMware Tools and prompted me to restart. There was also a prompt to start the installation of disc 2, but I figured it would be better to restart before doing that. When the machine came back up, I was prompted to log in as the administrator. The problem is that I was never prompted to create an administrator account or password. Is there a default password I can use? I've tried all the obvious ones (blank, "password", etc.) and Googling, but I didn't come up with anything.

  • Calling a MVC2 partial view using jquery returns empty string problem

    - by Jason
    I have an issue where I have a partial view that returns some HTML to be displayed. It's called when something is clicked on the page, using jQuery. The problem is that no matter how I call it, I get back an empty string even though the request reports success. This is happening in Chrome, going against my local machine. My controller looks like this:

        public ActionResult MyPartialView()
        {
            return PartialView(model);
        }

    I have tried jQuery's .get(), .post(), and .load(), and all have the same results. Here is an example using .post():

        $.post(url, function (data) {
            alert(data);
        });

    The result always comes back as an empty string. I can navigate to the partial view in the browser manually and get back the desired HTML. The URL I am using to call it is fully resolved, so it looks like "http://localhost/controller/mypartialview" rather than the relative path "/controller/mypartialview", which I thought was the original problem. Any idea what may cause this?

  • using Silverlight in SCORM content

    - by Jason
    I'm building an LMS using SharePoint (WSS 3.0) with the SharePoint Learning Kit (SLK). One of the requirements is to be able to host Silverlight content within the SCORM package. Has anyone done this before? I haven't been able to find much (anything) online that talks about how to do this. Most of the content tools that exist for SCORM can handle Flash, but I haven't come across anything that will do Silverlight. If all else fails, I'll try to manually build a SCORM package, but I'd really like to find some examples or howtos of doing this with Silverlight first.

  • How to Link VS2010 Database Project and LINQ to SQL

    - by Jason
    As I work with the new database projects in VS2010, and as I learn LINQ to SQL, I am curious about the best way to link the two groups of information so that when I update one, the other updates along with it. From my research here at SO, as well as on Google, the general rule of thumb appears to be: "Build the database, and then create your LINQ to SQL classes." Of course, if I make a change in my database, the LINQ to SQL classes don't update automatically, and I have to do it by hand. This is fairly simple right now, as my database is small, but I am curious whether there is an easier way. In addition, the LINQ to SQL tool is pretty nice: the ability to create tables, add associations, and even create inheritance is very simple. As my second question, I am curious whether VS2010 can work the other way - design the database in the DBML file and then link it back to my database project. I appreciate any help with either of these two questions. I'm really interested in making this as easy as possible, to reduce errors during development and improve the speed at which changes can be made.
