Search Results

Search found 25397 results on 1016 pages for 'network map'.

Page 77/1016

  • Google Maps fails to place some markers on the map every time

    - by Luca
    Hello! I'm trying to place about 130-140 markers on a custom Google map. I inject the map with jQuery and gMap (http://gmap.nurtext.de/), and every time, at random (not tied to specific markers), a lot of the markers are not shown. Firebug reports the error "a is null", which comes from this file: http://maps.gstatic.com/intl/it_ALL/mapfiles/285c/maps2.api/main.js. If I refresh the page, some other markers are "hidden" and other ones are shown. Has anyone had this problem? Can you help me or suggest another reliable way to show all the markers? Thanks a lot! EDIT: this is how I inject the map and the markers (with a lot of addresses, but only a few in this example): $(document).ready(function() { $("#container").gMap( { scrollwheel: false, maptype: G_PHYSICAL_MAP, icon: { image: "files/images/gmap_pin.png", iconsize: [32, 37], iconanchor: [32, 37], infowindowanchor: [12, 0] }, address: "Milano", zoom: 4, markers: [ { address: "Viale Certosa, Milano" }, { address: "Viale Ceccarini, Milano" }, { address: "Viale Italia, Milano" }, { address: "Via Rodi, Milano" }, ] }); });
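
    A likely cause of the random "a is null" failures is that geocoding 130+ addresses in one burst trips the geocoder's rate limit, so whichever lookups fail on that page load silently lose their markers. A minimal sketch of a workaround, assuming a move to the v3 API and its google.maps.Geocoder rather than the plugin's bulk address list:

        // Sketch: geocode one address at a time and back off when the geocoder
        // reports OVER_QUERY_LIMIT, instead of firing all requests at once.
        var geocoder = new google.maps.Geocoder();

        function placeMarkers(map, addresses, delayMs) {
          if (addresses.length === 0) return;
          geocoder.geocode({ address: addresses[0] }, function (results, status) {
            if (status === google.maps.GeocoderStatus.OK) {
              new google.maps.Marker({ map: map, position: results[0].geometry.location });
              placeMarkers(map, addresses.slice(1), delayMs);
            } else if (status === google.maps.GeocoderStatus.OVER_QUERY_LIMIT) {
              // Rate limited: wait, then retry the same address with a longer delay.
              setTimeout(function () { placeMarkers(map, addresses, delayMs * 2); }, delayMs);
            } else {
              placeMarkers(map, addresses.slice(1), delayMs); // skip addresses that fail outright
            }
          });
        }

        placeMarkers(map, ["Viale Certosa, Milano", "Viale Ceccarini, Milano"], 250);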

    Read the article

  • Large number of Google Map Markers and IE6?

    - by Sivakanesh
    I'm working on an application that generates a large number of Google Map markers (2,000 - 7,000) via JSON. I'm also using MarkerCluster. It works quickly in Chrome and Firefox, but IE6 takes a few minutes and then crashes the first time I try to zoom in. I'm not doing anything more than adding the markers to a map using jQuery and the GMap API. So I looked at the following URL of the regular Google Maps site: http://maps.google.co.uk/maps?f=q&source=s_q&hl=en&q=hotel&sll=53.182996,-2.581787&sspn=1.494529,4.927368&ie=UTF8&split=1&rq=1&ev=p&hq=hotel&hnear=&ll=53.123702,-2.730103&spn=1.496594,4.927368&t=h&z=8 It shows a lot of tiny markers (~1000) and works fine in IE6. Do you have any idea why this works while the markers added via the API struggle? Thanks
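
    One thing that tends to help older IE is keeping the markers off the map (and out of the DOM) until the clusterer decides what to draw. A rough sketch, assuming the v3 API and its MarkerClusterer utility library; the points array is hypothetical:

        // Sketch: build marker objects without a "map" property, then hand the
        // whole array to MarkerClusterer in one go so it controls visibility.
        var markers = [];
        for (var i = 0; i < points.length; i++) {        // points: [{lat: ..., lng: ...}, ...]
          markers.push(new google.maps.Marker({
            position: new google.maps.LatLng(points[i].lat, points[i].lng)
          }));
        }
        new MarkerClusterer(map, markers, {
          gridSize: 60,   // a coarser grid means fewer DOM elements for IE6 to manage
          maxZoom: 15     // beyond this zoom, individual markers are shown
        });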

    Read the article

  • Understanding Ruby Enumerable#map (with more complex blocks)

    - by mstksg
    Let's say I have a function def odd_or_even n if n%2 == 0 return :even else return :odd end end And I had a simple enumerable array simple = [1,2,3,4,5] And I ran it through map, with my function, using a do-end block: simple.map do |n| odd_or_even(n) end # => [:odd,:even,:odd,:even,:odd] How could I do this without, say, defining the function in the first place? For example, # does not work simple.map do |n| if n%2 == 0 return :even else return :odd end end # Desired result: # => [:odd,:even,:odd,:even,:odd] is not valid Ruby, and the compiler gets mad at me for even thinking about it. But how would I implement an equivalent sort of thing that works?
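
    The sticking point is return: inside a plain block it tries to return from the enclosing method, not from the block. The block's value is simply its last evaluated expression, so a sketch without the helper method:

        # The last expression in the block becomes the mapped value; no return needed.
        simple = [1, 2, 3, 4, 5]

        simple.map do |n|
          if n % 2 == 0
            :even
          else
            :odd
          end
        end
        # => [:odd, :even, :odd, :even, :odd]

        # Or, more idiomatically:
        simple.map { |n| n.even? ? :even : :odd }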

    Read the article

  • How to use R-Tree for plotting large number of map markers on google maps

    - by Eeyore
    After searching SO and multiple articles I haven't found a solution to my problem. What I am trying to achieve is to load 20,000 markers on Google Maps. An R-tree seems like a good approach, but it only helps when searching for points within the visible part of the map. When the map is zoomed out it will return all of the points and... crash the browser. There is also the problem of re-running the query once the user finishes dragging the map. I would like to know how I can use an R-tree and still handle all of the above.
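
    A common pattern is to query the spatial index only when the map comes to rest, send the viewport plus the zoom level, and let the server cluster (or cap) the result at low zooms so a fully zoomed-out view never returns all 20,000 raw points. A sketch against the Maps v3 API, assuming jQuery for the AJAX call; the /markers endpoint, its parameters, and redrawMarkers are hypothetical:

        // Sketch: requery on "idle" (fires after a pan or zoom ends), bounded by
        // the viewport and capped/clustered by zoom level on the server side.
        google.maps.event.addListener(map, 'idle', function () {
          var bounds = map.getBounds();
          var sw = bounds.getSouthWest();
          var ne = bounds.getNorthEast();

          $.getJSON('/markers', {
            west: sw.lng(), south: sw.lat(), east: ne.lng(), north: ne.lat(),
            zoom: map.getZoom(),   // low zoom => server returns clusters, not raw points
            limit: 500             // hard cap as a safety net
          }, function (points) {
            redrawMarkers(points); // hypothetical helper that reuses existing marker objects
          });
        });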

    Read the article

  • Adding Street View controls (the two icons just above the +) to a Google Map (v3)

    - by AlexV
    It's probably something really simple, but I can't find it in the docs, and I can't find a map that has it so I can check its source... I use version 3 of the API. I guess it's something to add in myOptions? var latlng = new google.maps.LatLng(-34.397, 150.644); var myOptions = { zoom: 8, center: latlng, mapTypeId: google.maps.MapTypeId.ROADMAP } map = new google.maps.Map(document.getElementById('map_canvas'), myOptions); Currently I only have the dragging controls and the zoom pane controls. I would like to have the two Street View control icons too. If you want the full source, I'm using this example as a base (how would you add Street View controls to this example?).
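
    It is indeed something to add in myOptions: in v3 the Street View "pegman" control is toggled through the streetViewControl option. A minimal sketch of the same options with one extra line:

        // Sketch: streetViewControl toggles the Street View (pegman) control.
        var latlng = new google.maps.LatLng(-34.397, 150.644);
        var myOptions = {
          zoom: 8,
          center: latlng,
          mapTypeId: google.maps.MapTypeId.ROADMAP,
          streetViewControl: true   // adds the Street View control next to the zoom UI
        };
        var map = new google.maps.Map(document.getElementById('map_canvas'), myOptions);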

    Read the article

  • OpenLayers: Get resolution of map in a given projection (4326)

    - by David Pfeffer
    I'm using OpenLayers to display OpenStreetMap maps. (Though I'd assume this should be general enough to work for any map product...) I'm displaying some very sophisticated vector overlays, and the amount and resolution of the features I'm returning from the server via GeoJSON to overlay have proven too much for many computers. What I'd like to do now instead is to send only data befitting the resolution of the current zoom. This should be relatively easy to do using the getResolution and calculateBounds methods on the Map object. calculateBounds returns a Bounds object that can then be transformed between the map projection and the display projection. How do I transform the resolution into my desired projection (4326 in this case)?
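
    Resolution is map units per pixel, so one way to express it in EPSG:4326 is to transform the current extent into 4326 and divide its width in degrees by the viewport width in pixels. A sketch against the OpenLayers 2 API:

        // Sketch: convert "units per pixel" into "degrees per pixel" by measuring
        // the reprojected extent against the viewport size.
        var extent = map.calculateBounds().clone();
        extent.transform(map.getProjectionObject(), new OpenLayers.Projection("EPSG:4326"));
        var degreesPerPixel = extent.getWidth() / map.getSize().w;
        // degreesPerPixel can now drive how much detail the server sends back.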

    Read the article

  • EF 4.x generated entity classes (POCO) and Map files

    - by JBeckton
    I have an MVC 4 app that I am working on using the code-first implementation, except that I cheated a bit: I created my database first, then generated my entity classes (POCOs) from it using the EF Power Tools (reverse engineer). I guess you could say I used the database-first method, but I have no .edmx file, just the context class and my entity classes (POCOs). I have a few projects in the works using MVC and EF with POCOs, but this is the only project where I used the tool to generate the POCOs from the database. My question is about the mapping files that get created when I generate my POCOs with the tool. What is the purpose of these Map files? I figured the map files are needed when generating the database from the model, as with the true code-first method. In my case, where I am using a tool to generate my model from the database, do the map files have any influence on how my app uses the entity classes?

    Read the article

  • Android, Google Map errors: BaseTileRequest, Server returned: 3

    - by Tung Mai Le
    I got some strange errors while developing some custom map overlays. Has anyone experienced these? Please help; thanks in advance.
    BaseTileRequest.readResponseData(BaseTileRequest.java:115), MapService$MapTileRequest.readResponseData(MapService.java:1473), MapService$MapTileRequest.readResponseData(MapService.java:1473)
    09-17 00:53:25.933: WARN/System.err(32480): java.io.IOException: Server returned: 3
    09-17 00:53:25.933: WARN/System.err(32480): at android_maps_conflict_avoidance.com.google.googlenav.map.BaseTileRequest.readResponseData(BaseTileRequest.java:115)
    09-17 00:53:25.938: WARN/System.err(32480): at android_maps_conflict_avoidance.com.google.googlenav.map.MapService$MapTileRequest.readResponseData(MapService.java:1473)
    09-17 00:53:25.938: WARN/System.err(32480): at android_maps_conflict_avoidance.com.google.googlenav.datarequest.DataRequestDispatcher.processDataRequest(DataRequestDispatcher.java:1117)
    09-17 00:53:25.943: WARN/System.err(32480): at android_maps_conflict_avoidance.com.google.googlenav.datarequest.DataRequestDispatcher.serviceRequests(DataRequestDispatcher.java:994)
    09-17 00:53:25.943: WARN/System.err(32480): at android_maps_conflict_avoidance.com.google.googlenav.datarequest.DataRequestDispatcher$DispatcherServer.run(DataRequestDispatcher.java:1702)
    09-17 00:53:25.948: WARN/System.err(32480): at java.lang.Thread.run(Thread.java:856)

    Read the article

  • Google Map V3 Constants as Variables

    - by Beardy
    In the example below I am trying to update the map type depending on the value of the selected option. Unfortunately it doesn't seem to load the map type into google.maps.MapTypeId, which is frustrating. I have tried it as a string as well as var gmapsMapType = google.maps.MapTypeId.++maptype; and I feel I am missing something here. HTML: <select id="maptype" name="maptype"> <option selected="selected" value="RoadMap">Road Map</option> <option value="Satellite">Satellite</option> <option value="Hybrid">Hybrid</option> <option value="Terrain">Terrain</option> </select> jQuery: var maptype = $('#maptype>option:selected').val().toUpperCase(); var gmapsMapType = google.maps.MapTypeId.+maptype; map.setMapTypeId(gmapsMapType); Help is appreciated.
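
    String concatenation cannot build a property access, but bracket notation can look the constant up by name. A small sketch; alternatively the option values could simply carry the lowercase ids ("roadmap", "satellite", ...) that setMapTypeId already accepts:

        // Sketch: look the MapTypeId member up by name instead of concatenating.
        var maptype = $('#maptype > option:selected').val().toUpperCase(); // e.g. "SATELLITE"
        map.setMapTypeId(google.maps.MapTypeId[maptype]);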

    Read the article

  • Simplest one-to-many Map case in Hibernate doesn't work in MySQL

    - by Malvolio
    I think this is pretty much the simplest case for mapping a Map (that is, an associative array) of entities. @Entity @AccessType("field") class Member { @Id protected long id; @OneToMany(cascade = CascadeType.ALL, fetch=FetchType.LAZY) @MapKey(name = "name") private Map<String, Preferences> preferences = new HashMap<String, Preferences>(); } @Entity @AccessType("field") class Preferences { @ManyToOne Member member; @Column String name; @Column String value; } This looks like it should work, and it does, in HSQL. In MySQL, there are two problems: First, it insists that there be a table called Members_Preferences, as if this were a many-to-many relationship. Second, it just doesn't work: since it never populates Members_Preferences, it never retrieves the Preferences. [My theory is, since I only use HSQL in memory-mode, it automatically creates Members_Preferences and never really has to retrieve the preferences map. In any case, either Hibernate has a huge bug in it or I'm doing something wrong.]
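
    The join table is the usual symptom of a @OneToMany that the provider treats as unidirectional: without mappedBy (or a @JoinColumn) it falls back to a Members_Preferences join table. A sketch of the owning-side wiring under that assumption, keeping the annotations from the question; the @Id added to Preferences is an assumption as well:

        // Sketch: point the collection at the many-to-one side with mappedBy so the
        // association is stored through Preferences' foreign key, not a join table.
        import java.util.HashMap;
        import java.util.Map;
        import javax.persistence.*;

        @Entity
        @AccessType("field")
        class Member {
            @Id
            protected long id;

            @OneToMany(mappedBy = "member", cascade = CascadeType.ALL, fetch = FetchType.LAZY)
            @MapKey(name = "name")
            private Map<String, Preferences> preferences = new HashMap<String, Preferences>();
        }

        @Entity
        @AccessType("field")
        class Preferences {
            @Id
            protected long id;   // an identifier is assumed; the snippet in the question has none

            @ManyToOne
            Member member;

            @Column
            String name;

            @Column
            String value;
        }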

    Read the article

  • AutoMapper determine what to map based on generic type

    - by Daz Lewis
    Hi, is there a way to provide AutoMapper with just a source object and have it determine the destination type automatically, based on the mapping registered for that source's type? So, for example, I have a type Foo that I always want mapped to Bar, but at runtime my code can receive any one of a number of generic types. public T Add(T entity) { //List of mappings var mapList = new Dictionary<Type, Type> { {typeof (Foo), typeof (Bar)} {typeof (Widget), typeof (Sprocket)} }; //Based on the type of T determine what we map to...somehow! var t = mapList[entity.GetType()]; //What goes in ?? to ensure var in the case of Foo will be a Bar? var destination = AutoMapper.Mapper.Map<T, ??>(entity); } Any help is much appreciated.
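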

    Read the article

  • Bugs in scroll-follow with Google map (Firefox and Safari)

    - by earlyriser
    A scroll-follow effect is when a part of the page is always visible, even when the window is scrolled. There are animated and static versions of this. The animated one is OK: http://robertomartinez.info/cobra/index.html But I prefer the fixed version; however, it has some bugs: http://robertomartinez.info/cobra/index_fixed.html - In Firefox, when you scroll the page, you will see a kind of vertical cut in the lorem ipsum text (below the HERE indication). This is caused by the image tiles of the map; if you drag and drop the map, the cut appears on another side. - In Safari, when you scroll the page, the div follows, but the map images stay in the same position. Do you have solutions for these issues? Thanks.

    Read the article

  • How do you search through a map?

    - by Jack Null
    I have a map: Map<String, String> ht = new HashMap(); and I would like to know how to search through it and find anything matching a particular string, and if there is a match, store it in an ArrayList. The map contains strings like this: 1,2,3,4,5,5,5 and the matching string would be 5. So far I have this: String match = "5"; ArrayList<String> result = new ArrayList<String>(); Enumeration num= ht.keys(); while (num.hasMoreElements()) { String number = (String) num.nextElement(); if(number.equals(match)) { result.add(number); } }
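
    As an aside, keys() and Enumeration belong to the old Hashtable; HashMap exposes keySet(), values() and entrySet(). A sketch that collects the keys whose value matches (swap in getKey() checks if the match is against keys instead):

        // Sketch: walk the entries once and collect whichever side of the pair you need.
        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        public class MapSearch {
            public static void main(String[] args) {
                Map<String, String> ht = new HashMap<String, String>();
                ht.put("a", "5");
                ht.put("b", "3");
                ht.put("c", "5");

                String match = "5";
                List<String> result = new ArrayList<String>();
                for (Map.Entry<String, String> entry : ht.entrySet()) {
                    if (match.equals(entry.getValue())) {
                        result.add(entry.getKey());          // keys whose value equals "5"
                    }
                }
                System.out.println(result);                  // e.g. [a, c]
                System.out.println(ht.containsValue(match)); // true -- quick existence check
            }
        }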

    Read the article

  • Using xsl:character-map on text nodes only

    - by jramos95
    I am trying to create a generic stylesheet that can convert all Latin characters in Unicode to uppercase ASCII characters. Using <xsl:character-map> works well except for one thing: namespaces. The character map converts all of my namespaces to upper case, which I do not want. Is there a way to utilize a character map to do what I want to all the other nodes while leaving the namespaces untouched? I see the disable-output-escaping attribute might be an option, but I haven't been able to make it work.
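
    One way to leave element and namespace names alone is to drop the serializer-level character map and rewrite only text nodes via an identity transform. A minimal XSLT 2.0 sketch; upper-case() stands in here for the full Latin-to-ASCII substitution, which could be done with translate() in the same template:

        <xsl:stylesheet version="2.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
          <!-- Identity transform: copies elements, attributes and namespaces as-is. -->
          <xsl:template match="@*|node()">
            <xsl:copy>
              <xsl:apply-templates select="@*|node()"/>
            </xsl:copy>
          </xsl:template>
          <!-- Only text nodes are rewritten, so names are never touched. -->
          <xsl:template match="text()">
            <xsl:value-of select="upper-case(.)"/>
          </xsl:template>
        </xsl:stylesheet>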

    Read the article

  • How do I map nested generics in NHibernate

    - by Gluip
    In NHibernate you can map generics like this: <class name="Units.Parameter`1[System.Int32], Units" table="parameter_int" > </class> But how can I map a class like Set<T> where T is a Parameter<int>, i.e. Set<Parameter<int>>? My hbm.xml mapping, which looks like this, fails: <class name="Set`1[[Units.Parameter`1[System.Int32], Units]],Units" table="settable"/> I simplified my mappings a little to get my point across clearly. Basically I want NHibernate to map a generic class whose type parameter is itself generic. What I understand from googling around is that NHibernate is not able to parse the name to the correct type in TypeNameParser.Parse(), which results in the following error when adding the mapping to the configuration: System.ArgumentException: Exception of type 'System.ArgumentException' was thrown. Parameter name: typeName@31 Has anybody found a way around this limitation?

    Read the article

  • Problem with JavaScript map/object

    - by akshay
    I am using a JavaScript map (object) to store keys and values. Later on I check whether a specified key is present in the map or not; sometimes it gives the correct result, but sometimes it doesn't. I tried printing the map using console.log(mapname) and it shows all the keys, but if I try to check whether a specific key is present, it sometimes gives the wrong answer. I am using the following code: //following code is called n times in loop with different/same values of x myMap : new Object(); var key = x ; //populated dynamically actually myMap[key] ="dummyval"; if(myIdMap[document.getElementById("abc").value.trim()]!=null) alert('present' ); else alert('not present'); What could the problem be?
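
    Two things in that snippet commonly cause the symptom: the value is stored in myMap but looked up in myIdMap, and "myMap : new Object();" is not a valid statement outside an object literal. A sketch using a plain object plus hasOwnProperty, which also avoids depending on what value was stored:

        // Sketch: one object for both writes and reads, keys normalised on insert,
        // presence tested with hasOwnProperty (or the "in" operator).
        var myMap = {};                          // not "myMap : new Object();"

        var key = String(x).trim();              // x populated dynamically, as in the question
        myMap[key] = "dummyval";

        var lookup = document.getElementById("abc").value.trim();
        if (myMap.hasOwnProperty(lookup)) {      // or: if (lookup in myMap)
          alert('present');
        } else {
          alert('not present');
        }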

    Read the article

  • Perl array and hash manipulation using map

    - by somebody
    I have the following test code use Data::Dumper; my $hash = { foo => 'bar', os => 'linux' }; my @keys = qw (foo os); my $extra = 'test'; my @final_array = (map {$hash->{$_}} @keys,$extra); print Dumper \@final_array; The output is $VAR1 = [ 'bar', 'linux', undef ]; Shouldn't the elements be "bar, linux, test"? Why is the last element undefined and how do I insert an element into @final_array? I know I can use the push function but is there a way to insert it on the same line as using the map command? Basically the manipulated array is meant to be used in an SQL command in the actual script and I want to avoid using extra variables before that and instead do something like: $sql->execute(map {$hash->{$_}} @keys,$extra);
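
    map consumes the entire list that follows its block, so $extra is treated as one more key to look up in %$hash, which is where the undef comes from. A sketch of the usual fix, parenthesising the map so $extra is appended afterwards:

        use Data::Dumper;

        my $hash  = { foo => 'bar', os => 'linux' };
        my @keys  = qw(foo os);
        my $extra = 'test';

        # Parenthesise the map so only @keys is mapped; $extra is then appended as-is.
        my @final_array = ((map { $hash->{$_} } @keys), $extra);
        print Dumper \@final_array;   # [ 'bar', 'linux', 'test' ]

        # The same shape works directly in the execute() call:
        # $sql->execute((map { $hash->{$_} } @keys), $extra);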

    Read the article

  • C++: a map to a 2-dimensional vector

    - by user1701545
    I want to create a C++ map where the key is, say, int and the value is a 2-D vector of double: map<int, vector<vector<double> > > myMap; Suppose I filled it, and now I would like to update the second vector mapped by each key (for example, divide each element by 2). How would I access that vector iteratively? The "itr->second[0]" syntax in the statement below is obviously wrong. What would be the right syntax for that action? for(std::map<int, vector<vector<double> > > itr = myMap.begin(); itr != myMap.end();++itr) { for(int i = 0;i < itr->second[0].size();++i) { itr->second[0][i] /= 2; } } thanks, rubi
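
    The part that will not compile is the declaration of itr: begin() returns an iterator, so the loop variable must be std::map<...>::iterator (or auto in C++11), not the map type itself. A sketch of the corrected loop:

        // Sketch: the loop variable has to be an iterator over the map, not the
        // map type itself; the rest of the question's loop was essentially fine.
        #include <map>
        #include <vector>

        int main() {
            std::map<int, std::vector<std::vector<double> > > myMap;

            for (std::map<int, std::vector<std::vector<double> > >::iterator itr = myMap.begin();
                 itr != myMap.end(); ++itr) {
                for (std::size_t i = 0; i < itr->second[0].size(); ++i) {
                    itr->second[0][i] /= 2;   // halve each element of that inner vector
                }
            }

            // With C++11 the same loop reads more cleanly:
            // for (auto& kv : myMap)
            //     for (double& d : kv.second[0])
            //         d /= 2;
        }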

    Read the article

  • Java Map question

    - by user552961
    I have one Map that contains some names and numbers: Map<String,Integer> abc = new TreeMap<String,Integer>(); It works fine; I can put some values in it, but when I use it in a different class it gives me the wrong order. For example, I put abc.put("a",1); abc.put("b",5); abc.put("c",3); Sometimes it returns the order (b,a,c) and sometimes (a,c,b). What is wrong with it? Is there any step that I am missing when I use this map?
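
    A TreeMap always iterates in key order, so for these keys the order is a, b, c every time. Seeing (b, a, c) or (a, c, b) usually means the map actually being read in the other class is a HashMap (for example, a field declared as Map that is assigned a HashMap), which makes no ordering promise. A small sketch of the guaranteed behaviour:

        import java.util.Map;
        import java.util.TreeMap;

        public class OrderDemo {
            public static void main(String[] args) {
                Map<String, Integer> abc = new TreeMap<String, Integer>();
                abc.put("a", 1);
                abc.put("b", 5);
                abc.put("c", 3);

                for (Map.Entry<String, Integer> e : abc.entrySet()) {
                    System.out.println(e.getKey() + " = " + e.getValue()); // always a, b, c
                }
                // For insertion order instead of sorted key order, use LinkedHashMap.
            }
        }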

    Read the article

  • Adapting Map Iterators Using STL/Boost/Lambdas

    - by John Dibling
    Consider the following non-working code: typedef map<int, unsigned> mymap; mymap m; for( int i = 1; i < 5; ++i ) m[i] = i; // 'remove' all elements from map where .second < 3 remove(m.begin(), m.end(), bind2nd(less<int>(), 3)); I'm trying to remove elements from this map where .second < 3. This obviously isn't written correctly. How do I write this correctly using: standard STL function objects and techniques, Boost.Bind, or C++0x lambdas? I know I'm not erasing the elements. Don't worry about that; I'm just simplifying the problem to solve.
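
    Whatever the predicate is written with, std::remove cannot be applied to the map itself: its keys are const and the container's ordering must stay intact, so the usual approach is to erase while iterating (or std::erase_if in C++20). A sketch:

        #include <map>

        int main() {
            typedef std::map<int, unsigned> mymap;
            mymap m;
            for (int i = 1; i < 5; ++i)
                m[i] = i;

            // Erase every element whose mapped value is < 3.
            for (mymap::iterator it = m.begin(); it != m.end(); /* no ++ here */) {
                if (it->second < 3)
                    m.erase(it++);   // post-increment keeps the iterator valid
                else
                    ++it;
            }

            // C++20 wraps the same idea up, and the lambda is where the predicate goes:
            // std::erase_if(m, [](const mymap::value_type& p) { return p.second < 3; });
        }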

    Read the article

  • In JPA, a Map of embeddable values, that have an embedded entity used as the key

    - by Schmuli
    I'm still new to JPA (and Hibernate, which I'm using as my provider), so maybe this just can't be done, but anyway... Consider the following code: @Entity class Root { @Id private long id; private String name; @ElementCollection private Map<ResourceType, Resource> resources; ... } @Entity class ResourceType { @Id private long id; private String name; } @Embeddable class Resource { private ResourceType resourceType; private long value; } In the database, there is a collection table, 'Root_resources', that stores the values of the map, but the resource type appears twice (actually, the resource type ID does), once as the KEY of the map, and once as part of the value. Is there a way, similar to, say, the @MapKey annotation, to indicate that the key is one of the columns of the value (i.e. embedded)?
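
    With JPA 2.0, an entity-typed key of an element collection can be given its own foreign-key column via @MapKeyJoinColumn, which also makes the resourceType field inside the embeddable redundant. A sketch under that assumption; the column and table names are illustrative and javax.persistence imports are assumed:

        // Sketch: the map key gets a dedicated FK column; the value no longer
        // repeats the ResourceType reference.
        @Entity
        class Root {
            @Id
            private long id;
            private String name;

            @ElementCollection
            @CollectionTable(name = "Root_resources")
            @MapKeyJoinColumn(name = "resourcetype_id")
            private Map<ResourceType, Resource> resources = new HashMap<ResourceType, Resource>();
        }

        @Embeddable
        class Resource {
            private long value;   // the type is carried by the map key alone
        }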

    Read the article

  • Can't compile std::map sorting, why?

    - by Vincenzo
    This is my code: map<string, int> errs; struct Compare { bool operator() (map<string, int>::const_iterator l, map<string, int>::const_iterator r) { return ((*l).second < (*r).second); } } comp; sort(errs.begin(), errs.end(), comp); Can't compile. This is what I'm getting: no matching function for call to ‘sort(..’ Why so? Can anyone help? Thanks!
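
    std::sort needs random-access iterators, and a map's ordering is fixed by its keys anyway, so the usual approach is to copy the entries into a vector of pairs and sort that. A sketch:

        #include <algorithm>
        #include <map>
        #include <string>
        #include <utility>
        #include <vector>

        typedef std::pair<std::string, int> entry;

        static bool byValue(const entry& l, const entry& r) {
            return l.second < r.second;
        }

        int main() {
            std::map<std::string, int> errs;
            errs["disk"] = 7;
            errs["net"]  = 2;

            // Copy the (key, value) pairs out, then sort the copy by value.
            std::vector<entry> sorted(errs.begin(), errs.end());
            std::sort(sorted.begin(), sorted.end(), byValue);
        }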

    Read the article

  • Proper network configuration for a KVM guest to be on the same networks as the host

    - by Steve Madsen
    I am running a Debian Linux server on Lenny. Within it, I am running another Lenny instance using KVM. Both servers are externally available, with public IPs, as well as a second interface with private IPs for the LAN. Everything works fine, except the VM sees all network traffic as originating from the host server. I suspect this might have something to do with the iptables-based firewall I'm running on the host. What I'd like to figure out is: how to I properly configure the host's networking such that all of these requirements are met? Both host and VMs have 2 network interfaces (public and private). Both host and VMs can be independently firewalled. Ideally, VM traffic does not have to traverse the host firewall. VMs see real remote IP addresses, not the host's. Currently, the host's network interfaces are configured as bridges. eth0 and eth1 do not have IP addresses assigned to them, but br0 and br1 do. /etc/network/interfaces on the host: # The primary network interface auto br1 iface br1 inet static address 24.123.138.34 netmask 255.255.255.248 network 24.123.138.32 broadcast 24.123.138.39 gateway 24.123.138.33 bridge_ports eth1 bridge_stp off auto br1:0 iface br1:0 inet static address 24.123.138.36 netmask 255.255.255.248 network 24.123.138.32 broadcast 24.123.138.39 # Internal network auto br0 iface br0 inet static address 192.168.1.1 netmask 255.255.255.0 network 192.168.1.0 broadcast 192.168.1.255 bridge_ports eth0 bridge_stp off This is the libvirt/qemu configuration file for the VM: <domain type='kvm'> <name>apps</name> <uuid>636b6620-0949-bc88-3197-37153b88772e</uuid> <memory>393216</memory> <currentMemory>393216</currentMemory> <vcpu>1</vcpu> <os> <type arch='i686' machine='pc'>hvm</type> <boot dev='hd'/> </os> <features> <acpi/> <apic/> <pae/> </features> <clock offset='utc'/> <on_poweroff>destroy</on_poweroff> <on_reboot>restart</on_reboot> <on_crash>restart</on_crash> <devices> <emulator>/usr/bin/kvm</emulator> <disk type='file' device='cdrom'> <target dev='hdc' bus='ide'/> <readonly/> </disk> <disk type='file' device='disk'> <source file='/raid/kvm-images/apps.qcow2'/> <target dev='vda' bus='virtio'/> </disk> <interface type='bridge'> <mac address='54:52:00:27:5e:02'/> <source bridge='br0'/> <model type='virtio'/> </interface> <interface type='bridge'> <mac address='54:52:00:40:cc:7f'/> <source bridge='br1'/> <model type='virtio'/> </interface> <serial type='pty'> <target port='0'/> </serial> <console type='pty'> <target port='0'/> </console> <input type='mouse' bus='ps2'/> <graphics type='vnc' port='-1' autoport='yes' keymap='en-us'/> </devices> </domain> Along with the rest of my firewall rules, the firewalling script includes this command to pass packets destined for a KVM guest: # Allow bridged packets to pass (for KVM guests). iptables -A FORWARD -m physdev --physdev-is-bridged -j ACCEPT (Not applicable to this question, but a side-effect of my bridging configuration appears to be that I can't ever shut down cleanly. The kernel eventually tells me "unregister_netdevice: waiting for br1 to become free" and I have to hard reset the system. Maybe a sign I've done something dumb?)

    Read the article

  • Two network interfaces and two IP addresses on the same subnet in Linux

    - by Scott Duckworth
    I recently ran into a situation where I needed two IP addresses on the same subnet assigned to one Linux host so that we could run two SSL/TLS sites. My first approach was to use IP aliasing, e.g. using eth0:0, eth0:1, etc, but our network admins have some fairly strict settings in place for security that squashed this idea: They use DHCP snooping and normally don't allow static IP addresses. Static addressing is accomplished by using static DHCP entries, so the same MAC address always gets the same IP assignment. This feature can be disabled per switchport if you ask and you have a reason for it (thankfully I have a good relationship with the network guys and this isn't hard to do). With the DHCP snooping disabled on the switchport, they had to put in a rule on the switch that said MAC address X is allowed to have IP address Y. Unfortunately this had the side effect of also saying that MAC address X is ONLY allowed to have IP address Y. IP aliasing required that MAC address X was assigned two IP addresses, so this didn't work. There may have been a way around these issues on the switch configuration, but in an attempt to preserve good relations with the network admins I tried to find another way. Having two network interfaces seemed like the next logical step. Thankfully this Linux system is a virtual machine, so I was able to easily add a second network interface (without rebooting, I might add - pretty cool). A few keystrokes later I had two network interfaces up and running and both pulled IP addresses from DHCP. But then the problem came in: the network admins could see (on the switch) the ARP entry for both interfaces, but only the first network interface that I brought up would respond to pings or any sort of TCP or UDP traffic. After lots of digging and poking, here's what I came up with. It seems to work, but it also seems to be a lot of work for something that seems like it should be simple. Any alternate ideas out there? Step 1: Enable ARP filtering on all interfaces: # sysctl -w net.ipv4.conf.all.arp_filter=1 # echo "net.ipv4.conf.all.arp_filter = 1" >> /etc/sysctl.conf From the file networking/ip-sysctl.txt in the Linux kernel docs: arp_filter - BOOLEAN 1 - Allows you to have multiple network interfaces on the same subnet, and have the ARPs for each interface be answered based on whether or not the kernel would route a packet from the ARP'd IP out that interface (therefore you must use source based routing for this to work). In other words it allows control of which cards (usually 1) will respond to an arp request. 0 - (default) The kernel can respond to arp requests with addresses from other interfaces. This may seem wrong but it usually makes sense, because it increases the chance of successful communication. IP addresses are owned by the complete host on Linux, not by particular interfaces. Only for more complex setups like load- balancing, does this behaviour cause problems. arp_filter for the interface will be enabled if at least one of conf/{all,interface}/arp_filter is set to TRUE, it will be disabled otherwise Step 2: Implement source-based routing I basically just followed directions from http://lartc.org/howto/lartc.rpdb.multiple-links.html, although that page was written with a different goal in mind (dealing with two ISPs). Assume that the subnet is 10.0.0.0/24, the gateway is 10.0.0.1, the IP address for eth0 is 10.0.0.100, and the IP address for eth1 is 10.0.0.101. Define two new routing tables named eth0 and eth1 in /etc/iproute2/rt_tables: ... 
top of file omitted ... 1 eth0 2 eth1 Define the routes for these two tables: # ip route add default via 10.0.0.1 table eth0 # ip route add default via 10.0.0.1 table eth1 # ip route add 10.0.0.0/24 dev eth0 src 10.0.0.100 table eth0 # ip route add 10.0.0.0/24 dev eth1 src 10.0.0.101 table eth1 Define the rules for when to use the new routing tables: # ip rule add from 10.0.0.100 table eth0 # ip rule add from 10.0.0.101 table eth1 The main routing table was already taken care of by DHCP (and it's not even clear that its strictly necessary in this case), but it basically equates to this: # ip route add default via 10.0.0.1 dev eth0 # ip route add 130.127.48.0/23 dev eth0 src 10.0.0.100 # ip route add 130.127.48.0/23 dev eth1 src 10.0.0.101 And voila! Everything seems to work just fine. Sending pings to both IP addresses works fine. Sending pings from this system to other systems and forcing the ping to use a specific interface works fine (ping -I eth0 10.0.0.1, ping -I eth1 10.0.0.1). And most importantly, all TCP and UDP traffic to/from either IP address works as expected. So again, my question is: is there a better way to do this? This seems like a lot of work for a seemingly simple problem.

    Read the article

  • NetworkManager Applet shows no networks

    - by Kkelk
    I am "the friend" referred to in the questions here and here. I decided to come and ask a question myself, as I can still not connect to the wireless network. I downloaded Keryx, as suggested here, and managed to download the necessary package and its dependencies. When I attempted to install the packages on Ubuntu using Keryx, Keryx just closed. Following this, I installed the packages manually using dpkg, and as far as I can tell, this was successful: kieran@ubuntu:~$ cd /host/wifi/Keryx/keryx/projects/Kieran/packages kieran@ubuntu:/host/wifi/Keryx/keryx/projects/Kieran/packages$ sudo dpkg -i *.deb [sudo] password for kieran: Selecting previously deselected package bcmwl-kernel-source. (Reading database ... 118296 files and directories currently installed.) Unpacking bcmwl-kernel-source (from bcmwl-kernel-source_5.60.48.36+bdcom-0ubuntu5_i386.deb) ... Selecting previously deselected package dkms. Unpacking dkms (from dkms_2.1.1.2-3ubuntu1.1_all.deb) ... Selecting previously deselected package fakeroot. Unpacking fakeroot (from fakeroot_1.14.4-1ubuntu1_i386.deb) ... Selecting previously deselected package linux-image. Unpacking linux-image (from linux-image_2.6.35.22.23_i386.deb) ... Selecting previously deselected package menu. Unpacking menu (from menu_2.1.44ubuntu1_i386.deb) ... Selecting previously deselected package patch. Unpacking patch (from patch_2.6-2ubuntu1_i386.deb) ... Setting up fakeroot (1.14.4-1ubuntu1) ... update-alternatives: using /usr/bin/fakeroot-sysv to provide /usr/bin/fakeroot (fakeroot) in auto mode. Setting up linux-image (2.6.35.22.23) ... Setting up menu (2.1.44ubuntu1) ... Setting up patch (2.6-2ubuntu1) ... Processing triggers for man-db ... Setting up dkms (2.1.1.2-3ubuntu1.1) ... Setting up bcmwl-kernel-source (5.60.48.36+bdcom-0ubuntu5) ... Loading new bcmwl-5.60.48.36+bdcom DKMS files... First Installation: checking all kernels... Building only for 2.6.35-22-generic Building for architecture i686 Building initial module for 2.6.35-22-generic Done. wl.ko: Running module version sanity check. - Original module - No original module exists within this kernel - Installation - Installing to /lib/modules/2.6.35-22-generic/updates/dkms/ depmod..... DKMS: install Completed. update-initramfs: deferring update (trigger activated) Processing triggers for install-info ... Processing triggers for doc-base ... Processing 31 changed 1 added doc-base file(s)... Registering documents with scrollkeeper... Processing triggers for menu ... Processing triggers for initramfs-tools ... update-initramfs: Generating /boot/initrd.img-2.6.35-22-generic Warning: No support for locale: en_GB.utf8 After rebooting, however, there were still no wireless networks in the NetworkManager Applet list. I opened the file /var/lib/NetworkManager/NetworkManager.state, and both NetworkEnabled and WirelessEnabled were set to True. While i'm very concious I may be asking a stupid question here, both my friend and I have nothing left to suggest, and as such - I would be very grateful for any answers as to how to get wireless working.

    Read the article
