Search Results

Search found 12214 results on 489 pages for 'video conversion'.


  • I Know What I Did This Summer: Put Down Trex Decking

    - by thatjeffsmith
    If you’re wondering why I would bore everyone with my pictures and frequent status updates/tweets from the past week – it’s so I could document the process of refurbishing my deck, or what some would call a porch. When we go to take a vacation, buy a car, do anything – we also read personal blogs to get the real story. So, if you’re curious about what it takes to tackle this sort of project, read on. Skills/Equipment/Manpower We Possessed I took the old decking out by myself. I’m about 230 lbs, more than 6′ tall, and I’m pretty healthy. This took about 8 hours over two afternoons. Three of us put the deck back together. My wife has two engineering degrees. Her father also has two engineering degrees. Lots of brainpower available here. Also, her dad ran the public works department for a country for more than 20 years – so lots and lots of practical experience on hand. We had a compound mitre saw, a skilsaw, 2-3 crowbars, a framing hammer, 3 cordless drills, a corded drill, lots of sawhorses, a power sander, an angle grinder, a 10×10 Coleman canopy tent, a Ford F-150 pickup truck, outdoor speakers and lots of iTunes playlists, plenty of water and cold beer. Why We Did This Our deck was relatively young – it was built in 2005. However, the pressure treated boards must not have been adequately maintained before we bought the house. I had powerwashed the deck every other year and had it stained a few times. The boards just rotted. We’re going to be in the house for a long time, and we wanted something that would look nice and require little maintenance. More bad deck boards The deck boards were in bad shape Things We Learned The two most important things: The hidden fasteners have to be put in JUST right. Wedge them into the grooved board, then bend down the bit that is screwed down. We didn’t do this on the first board and couldn’t get the second board to fit nearly close enough. Watching the official TREX YouTube video helped immensely, and we should have watched that first. When pre-drilling holes for the boards that need screwed down – DO NOT pre-drill through the underlying framing wood. ONLY pre-drill through the TREX itself. The screw won’t seat in the board properly. Instead of sitting down flush with the board, it will stop at the top of the board and just spin. I had to call the the place that sold me the screws to find this out. So about a third of our screws look like crap. If it doesn’t look or feel right – stop everything and pick up your computer or your phone. It’s not right, and it will be much easier to stop and find out why. We didn’t do this, and now I’m going to see every screw that’s not flush with the boards and get upset. Oh well. The Process How much time did it take? Well I spent about 8 hours taking the deck apart. And then the 3 of use spent 8 hours the first day, 10 hours the second day, 8 hours the third, and another 6 hours on the fourth day. That’s like 104 man-hours. We supposedly saved four or five thousand dollars in labor, but don’t do the math here or you might get a bit upset. The main thing is that we got what we wanted, and there won’t be any surprises later. Now for some pictures… This 6”+ pry bar made the destruction of the old deck much easier Most of the joists, once exposed, were OK. This joist wasn’t sitting on ANYTHING before. We think a lazy gas person cut the board to sneak a gas line in. 
Awesome… These monster lag bolts had to be accounted for when putting in the additional framing The border pattern Sheri wanted to put in required a lot more framing. These were the first boards to go down – we screwed them in as there was no way to attach clips I sat, kicked in the boards, and then drilled these clips in – but my wife was able to go MUCH faster by using her hands to lock the boards in and drill on her knees. I liked locking the board in with my feet when they needed to be ‘encouraged’ to go straight. The first board took FOREVER to go in, but then when we got rolling, we were able to put in a 20′ board in less than 10 minutes. This was end of construction day #2 – we got much further than we thought we would. Ah, the dreaded last 10% – what to do here? Remember those ‘floating’ stringers? Yeah, we fixed that up a bit, too. My wife used a website (and her brain) to calculate exactly how to cut the stringers to give us the rise/run we needed with the proper clearance and all that jazz. The stairs with stringers and toe kicks – this was worth the effort It started raining on us as I screwed down the steps – this we managed to get our shade tent up on the deck to protect us from the rain too The stairs, finished Finished, mostly Good corner shot The top of the stairs Stairs, looking down Celebratory beer In Summary There are a few things we’re not happy with. I think we can fix them up – but later. I have a few things left to finish, rewire the lighting, get the gas grille put back in, and rehang some screen doors. I was expecting this to be a lot worse than it was. If I didn’t have the help, I would have never done it myself. But I’m glad that I did have that help and did do that project. It’s not often you get to spend that kind of qualify time with family and building cool stuff.

    Read the article

  • Feedback on meeting of the Linux User Group of Mauritius

    Once upon a time in a country far far away... Okay, actually it's not that bad, but it has been a while since the last meeting of the Linux User Group of Mauritius (LUGM). There have been plans in the past but it never really happened. Finally, Selven took the opportunity and organised a new meetup with low administrative overhead, proper scheduling on alternative dates and a small attendee survey on the preferred option. All the pre-work was nicely executed. At first, I wasn't sure whether it would be possible to attend. Luckily I got some additional information, like children should come, too, and I was sold on this community gathering. According to other long-term members of the LUGM it was the first time 'ever' that a gathering was organised outside of Quatre Bornes, and I have to admit it was great! LUGM - user group meeting on 15.06.2013 in L'Escalier Quick overview of Linux & the LUGM With a little bit of delay the LUGM meeting officially started with a quick overview and introduction to Linux presented by Avinash. During the session he told the audience that there had been quite some activity over the island some years ago but unfortunately it had been quiet during recent times. Of course, we also spoke about the acknowledged world dominance of Linux - thanks to Android - and the interesting possibilities for countries like Mauritius. It is known that a couple of public institutions have their back-end infrastructure running on Red Hat Linux systems, but the presence on the desktop is still very low. Users are simply hanging on to Windows XP and older versions of Microsoft Office. Following the introduction to the LUGM, Ajay joined the session and it quickly changed into a panel discussion with lots of interesting questions and answers, sharing of first-hand experience either on the job or in private use of Linux, and a couple of ideas about how the LUGM could promote Linux a bit more in Mauritius. It was great to get an insight into other attendees' opinions and activities, especially considering that I have been using Linux since around 1996/97. Frankly speaking, I bought a SuSE 4.x distribution back in those days because I couldn't achieve certain tasks on Windows NT 4.0 without spending a fortune. OpenELEC Mediacenter Next, Selven gave us a decent introduction to OpenELEC: Open Embedded Linux Entertainment Center (OpenELEC) is a small Linux distribution built from scratch as a platform to turn your computer into an XBMC media center. OpenELEC is designed to make your system boot fast, and the install is so easy that anyone can turn a blank PC into a media machine in less than 15 minutes. I didn't know about it until this presentation. In the past, I was mainly attached to Video Disk Recorder (VDR) as it allows the use of satellite receiver cards very easily. Hm, somehow I'm still missing my precious HTPC that I had to leave back in Germany years ago. It was a great piece of hardware and software; a self-built PC in a standard HiFi-sized (43cm) black desktop casing with 2 full-featured Hauppauge DVB-S cards, an old-fashioned Voodoo graphics card, a WiFi card, a Pioneer slot-in DVD drive, and fully remote controlled via infra-red thanks to Debian, VDR and LIRC. With EPG, scheduled recordings and general multimedia centre functions it offered all the necessary comfort in the living room, besides a Nintendo game console; actually a GameCube at that time... But I have to admit that putting OpenELEC on a Raspberry Pi would be a cool DIY project in the near future.
    LUGM - our next generation of Linux users (15.06.2013) Project Evil Genius (PEG) Don't be scared of the paragraph header. Ish gave us a cool explanation of why he named it PEG - Project Evil Genius; it's because of the time of day when he was scripting down his ideas to be able to build, package and provide software applications to various Linux distributions. The main influence came from openSuSE, but the platform didn't cater for his needs and ideas, so he started to work out something on his own. During his passionate session he also talked about the amazing experience he had thanks to other Linux users from all over the world. Over the next couple of days Ish promised to put his script on GitHub... Looking forward to that. Check out Ish's personal blog over at hacklog.in. Highly recommended reading. Why India? Simply because the registration fees per year for an Indian domain are approximately 20 times less than for a Mauritian domain (.mu). Exploring the beach of L'Escalier after the meeting 'After-party' at the beach of L'Escalier Phew, after such interesting sessions, ideas around Linux and good conversation during the breaks and over lunch, it was time for a little break-out. Selven suggested that we all should head down to the beach of L'Escalier and get some impressions of nature down here in the south of the island. Talking about 'beach' ;-) - absolutely not comparable to the white-sanded ones here in Flic en Flac... There are no lagoons down at the south coast of Mauritius, and watching the breaking waves is a different experience and joy after all. Unfortunately, I was a little bit worried about the thoughtless littering at such a remote location. You have to drive on natural paths through the sugar cane fields, and I was really shocked by the amount of rubbish lying around almost everywhere. Sad, really sad, and it concurs with Yasir's recent article on the same topic. Résumé & outlook It was a great event. I met new people, had some good conversations, and even my children enjoyed themselves the whole day. The location was well chosen, with enough space for each and every one, parking spaces and even a playground for the children. Also, a big "Thank You" to Selven and his helpers for the organisation and the preparation of lunch. I'm kind of sure that this was an exceptional meeting of the LUGM and I'm really looking forward to the next gathering of Linux geeks. Hopefully, soon. All images are courtesy of Avinash Meetoo. More pictures are available on Flickr.

    Read the article

  • PASS Summit – looking back on my first time

    - by Fatherjack
      So I was lucky enough to get my first experience of PASS Summit this year and took some time beforehand to read some blogs and reference material to get an idea on what to do and how to get the best out of my visit. Having been to other conferences – technical and non-technical – I had a reasonable idea on the routine and what to expect in general. Here is a list of a few things that I have learned/remembered as the week has gone by. Wear comfortable shoes. This actually needs to be broadened to Take several pairs of comfortable shoes. You will be spending many many hours, for several days one after another. Having comfortable feet that can literally support you for the duration will make the week in general a whole lot better. Not only at the conference but getting to and from you could well be walking. In the evenings you will be walking around town and standing talking in various bars and clubs. Looking back, on some days I was on my feet for over 20 hours. Make friends. This is a given for the long term benefits it brings but there is also an immediate reward in being at a conference with a friend or two. Some events are bigger and more popular than others and some have the type of session that every single attendee will want to be in. This is great for those that get in but if you are in the bathroom or queuing for coffee and you miss out it sucks. Having a friend that can get in to a room and reserve you a seat is a great advantage to make sure you get the content that you want to see and still have the coffee that you need. Don’t go to every session you want to see This might sound counter intuitive and it relies on the sessions being recorded in some way to guarantee you don’t totally miss out. Both PASS Summit and SQL Bits sessions are recorded (summit is audio, SQLBits is video) and this means that if you get into a good conversation with someone over a coffee you don’t have to break it up to go to a session. Obviously there is a trade-off here and you need to decide on the tipping point for yourself but a conversation at a place like this could make a big difference to the next contract or employer you have or it might simply be great catching up with some friends you don’t see so often. Go to at least one session you don’t want to Again, this will seem to be contrary to normal logic but there is no reason why you shouldn’t learn about a part of SQL Server that isn’t part of your daily routine. Not only will you learn something new but you will also pick up on the feelings and attitudes of the people in the session. So, if you are a DBA, head off to a BI session and so on. You’ll hear BI speakers speaking to a BI audience and get to understand their point of view and reasoning for making the decisions they do. You will also appreciate the way that your decisions and instructions affect the way they have to work. This will help you a lot when you are on a project, working with multiple teams and make you all more productive. Socialise While you are at the conference venue, speak to people. Ask questions, be interested in whoever you are speaking to. You get chances to talk to new friends at breakfast, dinner and every break between sessions. The only people that might not talk to you would be speakers that are about to go and give a session, in most cases speakers like peace and quiet before going on stage. Other than that the people around you are just waiting for someone to talk to them so make the first move. 
There is a whole lot going on outside of the conference hours and you should make an effort to join in with some of this too. At karaoke evenings or just out for a quiet drink with a few of the people you meet at the conference. Either way, don’t be a recluse and hide in your room or be alone out in the town. Don’t talk to people Once again this sounds wrong but stay with me. I have spoken to a number of speakers since Summit 2013 finished and they have all mentioned the time it has taken them to move about the conference venue due to people stopping them for a chat or to ask a question. 45 minutes to walk from a session room to the speaker room in one case. Wow. While none of the speakers were upset about this sort of delay I think delegates should take the situation into account and possibly defer their question to an email or to a time when the person they want is clearly less in demand. Give them a chance to enjoy the conference in the same way that you are, they may actually want to go to a session or just have a rest after giving their session – talking for 75 minutes is hard work, taking an extra 45 minutes right after is unbelievable. I certainly hope that they get good feedback on their sessions and perhaps if you spoke to a speaker outside a session you can give them a mention in the ‘any other comments’ part of the feedback, just to convey your gratitude for them giving up their time and expertise for free. Say thank you I just mentioned giving the speakers a clear, visible ‘thank you’ in the feedback but there are plenty of people that help make any conference the success it is that would really appreciate hearing that their efforts are valued. People on the registration desk, volunteers giving schedule guidance and directions, people on the community zone are all volunteers giving their time to help you have the best experience possible. Send an email to PASS and convey your thoughts about the work that was done. Maybe you want to be a volunteer next time so you could enquire how you get into that position at the same time. This isn’t an exclusive list and you may agree or disagree with the points I have made, please add anything you think is good advice in the comments. I’d like to finish by saying a huge thank you to all the people involved in planning, facilitating and executing the PASS Summit 2013, it was an excellent event and I know many others think it was a totally worthwhile event to attend.

    Read the article

  • Computer Networks UNISA - Chap 10 &ndash; In Depth TCP/IP Networking

    - by MarkPearl
    After reading this section you should be able to Understand methods of network design unique to TCP/IP networks, including subnetting, CIDR, and address translation Explain the differences between public and private TCP/IP networks Describe protocols used between mail clients and mail servers, including SMTP, POP3, and IMAP4 Employ multiple TCP/IP utilities for network discovery and troubleshooting Designing TCP/IP-Based Networks The following sections explain how network and host information in an IPv4 address can be manipulated to subdivide networks into smaller segments. Subnetting Subnetting separates a network into multiple logically defined segments, or subnets. Networks are commonly subnetted according to geographic locations, departmental boundaries, or technology types. A network administrator might separate traffic to accomplish the following… Enhance security Improve performance Simplify troubleshooting The challenges of Classful Addressing in IPv4 (No subnetting) The simplest type of IPv4 addressing is known as classful addressing (the Class A, Class B & Class C network addresses). Classful addressing has the following limitations. Restriction in the number of usable IPv4 addresses (Class C would be limited to 254 addresses) Difficulty separating traffic from various parts of a network Because of these limitations, subnetting was introduced. IPv4 Subnet Masks Subnetting depends on the use of subnet masks to identify how a network is subdivided. A subnet mask indicates where network information is located in an IPv4 address. A 1 in a subnet mask indicates that the corresponding bit in the IPv4 address contains network information (likewise, a 0 indicates the opposite). Each network class is associated with a default subnet mask… Class A = 255.0.0.0 Class B = 255.255.0.0 Class C = 255.255.255.0 An example of calculating the network ID for a particular device with a subnet mask is shown below… IP Address = 199.34.89.127 Subnet Mask = 255.255.255.0 Resultant Network ID = 199.34.89.0 IPv4 Subnetting Techniques Subnetting breaks the rules of classful IPv4 addressing. Read page 490 for a detailed explanation Calculating IPv4 Subnets Read page 491 – 494 for an explanation Important… Subnetting only applies to the devices internal to your network. Everything external looks at the class of the IP address instead of the subnet network ID. This way, traffic directed to your network externally still knows where to go, and once it has entered your internal network it can then be prioritized and segmented. CIDR (Classless Interdomain Routing) CIDR is also known as classless routing or supernetting. In CIDR, conventional network class distinctions do not exist; the subnet boundary can move to the left, generating more usable IP addresses on your network. A subnet created by moving the subnet boundary to the left is known as a supernet. With CIDR also came new shorthand for denoting the position of subnet boundaries, known as CIDR notation or slash notation. CIDR notation takes the form of the network ID followed by a forward slash (/) followed by the number of bits that are used for the extended network prefix. To take advantage of classless routing, your network's routers must be able to interpret IP addresses that don't adhere to conventional network class parameters. Routers that rely on older routing protocols (i.e. RIP) are not capable of interpreting classless IP addresses.
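    As a quick, concrete illustration of the subnet-mask and CIDR arithmetic above, here is a minimal sketch using Python's standard ipaddress module. It is not part of the chapter: the 199.34.89.127 / 255.255.255.0 figures are the example from the text, while the /23 supernet is an assumed value purely for illustration.

        import ipaddress

        # The chapter's example: a Class C-style /24 mask on 199.34.89.127
        host = ipaddress.ip_interface("199.34.89.127/255.255.255.0")
        print(host.network)   # 199.34.89.0/24 -> network ID 199.34.89.0
        print(host.netmask)   # 255.255.255.0

        # The same network ID computed "by hand": bitwise AND of address and mask
        addr = int(ipaddress.ip_address("199.34.89.127"))
        mask = int(ipaddress.ip_address("255.255.255.0"))
        print(ipaddress.ip_address(addr & mask))   # 199.34.89.0

        # CIDR/supernetting: moving the subnet boundary one bit to the left (/23)
        # doubles the address space of a single /24 (assumed example network)
        supernet = ipaddress.ip_network("199.34.88.0/23")
        print(supernet.num_addresses)   # 512 addresses instead of 256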
    Internet Gateways Gateways are a combination of software and hardware that enable two different network segments to exchange data. A gateway facilitates communication between different networks or subnets. Because one device cannot send data directly to a device on another subnet, a gateway must intercede and hand off the information. Every device on a TCP/IP-based network has a default gateway (a gateway that first interprets its outbound requests to other subnets, and then interprets its inbound requests from other subnets). The internet contains a vast number of routers and gateways. If each gateway had to track addressing information for every other gateway on the Internet, it would be overtaxed. Instead, each handles only a relatively small amount of addressing information, which it uses to forward data to another gateway that knows more about the data’s destination. The gateways that make up the internet backbone are called core gateways. Address Translation An organization’s default gateway can also be used to “hide” the organization’s internal IP addresses and keep them from being recognized on a public network. A public network is one that any user may access with little or no restrictions. On private networks, hiding IP addresses allows network managers more flexibility in assigning addresses. Clients behind a gateway may use any IP addressing scheme, regardless of whether it is recognized as legitimate by the Internet authorities, but as soon as those devices need to go on the internet, they must have legitimate IP addresses to exchange data. When a client’s transmission reaches the default gateway, the gateway opens the IP datagram and replaces the client’s private IP address with an Internet-recognized IP address. This process is known as NAT (Network Address Translation). TCP/IP Mail Services All Internet mail services rely on the same principles of mail delivery, storage, and pickup, though they may use different types of software to accomplish these functions. Email servers and clients communicate through special TCP/IP application layer protocols. These protocols, all of which operate on a variety of operating systems, are discussed below… SMTP (Simple Mail Transfer Protocol) The protocol responsible for moving messages from one mail server to another over TCP/IP-based networks. SMTP belongs to the application layer of the OSI model and relies on TCP as its transport protocol. Operates from port 25 on the SMTP server Simple sub-protocol, incapable of doing anything more than transporting mail or holding it in a queue MIME (Multipurpose Internet Mail Extensions) The standard message format specified by SMTP allows for lines that contain no more than 1000 ASCII characters, meaning that if you relied solely on SMTP you would have very short messages and no way to include things like pictures in an email. MIME is a standard for encoding and interpreting binary files, images, video, and non-ASCII character sets within an email message. MIME identifies each element of a mail message according to content type. MIME does not replace SMTP but works in conjunction with it.
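    The SMTP and MIME roles described above can be seen in a few lines of code. The following is only an illustrative sketch (not from the chapter) using Python's standard smtplib and email.mime packages; the server name, addresses and attachment file are placeholders.

        import smtplib
        from email.mime.multipart import MIMEMultipart
        from email.mime.text import MIMEText
        from email.mime.image import MIMEImage

        msg = MIMEMultipart()
        msg["From"] = "sender@example.com"      # placeholder addresses
        msg["To"] = "recipient@example.com"
        msg["Subject"] = "MIME demo"

        # A plain ASCII body - roughly what bare SMTP alone could carry
        msg.attach(MIMEText("Hello, this is the text part of the message."))

        # A binary attachment, encoded and labelled by content type via MIME
        with open("photo.jpg", "rb") as fh:     # placeholder file
            msg.attach(MIMEImage(fh.read(), name="photo.jpg"))

        # SMTP then moves the finished message to the mail server (port 25)
        with smtplib.SMTP("mail.example.com", 25) as server:
            server.send_message(msg)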
    Most modern email clients and servers support MIME. POP (Post Office Protocol) POP is an application layer protocol used to retrieve messages from a mail server. POP3 relies on TCP and operates over port 110. With POP3, mail is delivered to and stored on a mail server until it is downloaded by a user. A disadvantage of POP3 is that it typically does not allow users to save their messages on the server; because of this, IMAP is sometimes used instead. IMAP (Internet Message Access Protocol) IMAP is a retrieval protocol that was developed as a more sophisticated alternative to POP3. The single biggest advantage IMAP4 has over POP3 is that users can store messages on the mail server, rather than having to continually download them. Users can retrieve all or only a portion of any mail message. Users can review their messages and delete them while the messages remain on the server. Users can create sophisticated methods of organizing messages on the server. Users can share a mailbox in a central location. Disadvantages of IMAP are typically related to the fact that it requires more storage space on the server (a short sketch contrasting POP3 and IMAP4 retrieval follows at the end of this section). Additional TCP/IP Utilities Nearly all TCP/IP utilities can be accessed from the command prompt on any type of server or client running TCP/IP. The syntax may differ depending on the OS of the client. Below is a list of additional TCP/IP utilities – research their use on your own! Ipconfig (Windows) & Ifconfig (Linux) Netstat Nbtstat Hostname, Host & Nslookup Dig (Linux) Whois (Linux) Traceroute (Tracert) Mtr (my traceroute) Route
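    As referenced above, here is a rough sketch contrasting POP3 and IMAP4 retrieval, using Python's standard poplib and imaplib modules. It is illustrative only: server names, credentials and the use of the default IMAP port 143 are assumptions, not details from the chapter.

        import poplib
        import imaplib

        # POP3 (port 110): messages are downloaded from the server's mail drop
        pop = poplib.POP3("mail.example.com", 110)
        pop.user("alice")
        pop.pass_("secret")
        count, _size = pop.stat()
        if count:
            _resp, lines, _octets = pop.retr(1)   # pull the first message down
            print(b"\n".join(lines)[:200])
        pop.quit()

        # IMAP4: messages stay on the server; the client can search, fetch only
        # headers, and keep messages organised in server-side folders
        imap = imaplib.IMAP4("mail.example.com")
        imap.login("alice", "secret")
        imap.select("INBOX")
        _typ, data = imap.search(None, "UNSEEN")  # review mail without downloading bodies
        for num in data[0].split():
            _typ, header = imap.fetch(num, "(BODY.PEEK[HEADER.FIELDS (SUBJECT)])")
            print(header[0][1].decode(errors="replace").strip())
        imap.logout()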

    Read the article

  • Ubuntu 12.04 wireless (wifi) not working, can not upgrade to 12.10, touchpad gestures not working. What to do?

    - by Ritwik
    I installed ubuntu 12.04 LTS 3 days ago and since then wireless feature and touchpad gestures are not working. Tried everything on internet but still unsuccessful. I cant upgrade to ubuntu 12.10. These are the following comments I tried. Please help me. EDIT: just realized usb 3.0 is also not working. COMMAND lsb_release -r OUTPUT ----------------------------------------------------------------- Release: 12.04 ----------------------------------------------------------------- COMMAND lspci OUTPUT ------------------------------------------------------------------ 00:00.0 Host bridge: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor DRAM Controller (rev 06) 00:01.0 PCI bridge: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor PCI Express x16 Controller (rev 06) 00:01.1 PCI bridge: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor PCI Express x8 Controller (rev 06) 00:02.0 VGA compatible controller: Intel Corporation 4th Gen Core Processor Integrated Graphics Controller (rev 06) 00:03.0 Audio device: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor HD Audio Controller (rev 06) 00:14.0 USB controller: Intel Corporation 8 Series/C220 Series Chipset Family USB xHCI (rev 05) 00:16.0 Communication controller: Intel Corporation 8 Series/C220 Series Chipset Family MEI Controller #1 (rev 04) 00:1a.0 USB controller: Intel Corporation 8 Series/C220 Series Chipset Family USB EHCI #2 (rev 05) 00:1b.0 Audio device: Intel Corporation 8 Series/C220 Series Chipset High Definition Audio Controller (rev 05) 00:1c.0 PCI bridge: Intel Corporation 8 Series/C220 Series Chipset Family PCI Express Root Port #1 (rev d5) 00:1c.1 PCI bridge: Intel Corporation 8 Series/C220 Series Chipset Family PCI Express Root Port #2 (rev d5) 00:1c.2 PCI bridge: Intel Corporation 8 Series/C220 Series Chipset Family PCI Express Root Port #3 (rev d5) 00:1d.0 USB controller: Intel Corporation 8 Series/C220 Series Chipset Family USB EHCI #1 (rev 05) 00:1f.0 ISA bridge: Intel Corporation HM86 Express LPC Controller (rev 05) 00:1f.2 SATA controller: Intel Corporation 8 Series/C220 Series Chipset Family 6-port SATA Controller 1 [AHCI mode] (rev 05) 00:1f.3 SMBus: Intel Corporation 8 Series/C220 Series Chipset Family SMBus Controller (rev 05) 07:00.0 3D controller: NVIDIA Corporation GF117M [GeForce 610M/710M / GT 620M/625M/630M/720M] (rev a1) 08:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8101E/RTL8102E PCI Express Fast Ethernet controller (rev 07) 09:00.0 Unassigned class [ff00]: Realtek Semiconductor Co., Ltd. RTS5229 PCI Express Card Reader (rev 01) 0f:00.0 Network controller: Qualcomm Atheros QCA9565 / AR9565 Wireless Network Adapter (rev 01) ------------------------------------------------------------------ COMMAND sudo apt-get install linux-backports-modules-wireless-lucid-generic OUTPUT ------------------------------------------------------------------- Reading package lists... Done Building dependency tree Reading state information... 
Done E: Unable to locate package linux-backports-modules-wireless-lucid-generic ------------------------------------------------------------------- COMMAND cat /etc/lsb-release; uname -a OUTPUT ------------------------------------------------------------------- DISTRIB_ID=Ubuntu DISTRIB_RELEASE=12.04 DISTRIB_CODENAME=precise DISTRIB_DESCRIPTION="Ubuntu 12.04.5 LTS" Linux ritwik-PC 3.2.0-67-generic #101-Ubuntu SMP Tue Jul 15 17:46:11 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux ------------------------------------------------------------------- COMMAND lspci -nnk | grep -iA2 net OUTPUT ------------------------------------------------------------------- 08:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8101E/RTL8102E PCI Express Fast Ethernet controller [10ec:8136] (rev 07) Subsystem: Hewlett-Packard Company Device [103c:225d] Kernel driver in use: r8169 -- 0f:00.0 Network controller [0280]: Qualcomm Atheros QCA9565 / AR9565 Wireless Network Adapter [168c:0036] (rev 01) Subsystem: Hewlett-Packard Company Device [103c:217f] ------------------------------------------------------------------- COMMAND lsusb OUTPUT ------------------------------------------------------------------- Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub Bus 001 Device 002: ID 8087:8008 Intel Corp. Bus 002 Device 002: ID 8087:8000 Intel Corp. ------------------------------------------------------------------- COMMAND iwconfig OUTPUT ------------------------------------------------------------------- lo no wireless extensions. eth0 no wireless extensions. ------------------------------------------------------------------- COMMAND rfkill list all OUTPUT ------------------------------------------------------------------- 0: hp-wifi: Wireless LAN Soft blocked: no Hard blocked: no 1: hp-bluetooth: Bluetooth Soft blocked: no Hard blocked: no ------------------------------------------------------------------- COMMAND lsmod OUTPUT ------------------------------------------------------------------- Module Size Used by snd_hda_codec_realtek 224215 1 bnep 18281 2 rfcomm 47604 0 bluetooth 180113 10 bnep,rfcomm parport_pc 32866 0 ppdev 17113 0 nls_iso8859_1 12713 1 nls_cp437 16991 1 vfat 17585 1 fat 61512 1 vfat snd_hda_intel 33719 3 snd_hda_codec 127706 2 snd_hda_codec_realtek,snd_hda_intel snd_hwdep 17764 1 snd_hda_codec snd_pcm 97275 2 snd_hda_intel,snd_hda_codec snd_seq_midi 13324 0 snd_rawmidi 30748 1 snd_seq_midi snd_seq_midi_event 14899 1 snd_seq_midi snd_seq 61929 2 snd_seq_midi,snd_seq_midi_event nouveau 775039 0 joydev 17693 0 snd_timer 29990 2 snd_pcm,snd_seq snd_seq_device 14540 3 snd_seq_midi,snd_rawmidi,snd_seq ttm 76949 1 nouveau uvcvideo 72627 0 snd 79041 15 snd_hda_codec_realtek,snd_hda_intel,snd_hda_codec,snd_hwdep,snd_pcm,snd_rawmidi,snd_seq,snd_timer,snd_seq_device videodev 98259 1 uvcvideo drm_kms_helper 46978 1 nouveau psmouse 98051 0 drm 241971 3 nouveau,ttm,drm_kms_helper i2c_algo_bit 13423 1 nouveau soundcore 15091 1 snd snd_page_alloc 18529 2 snd_hda_intel,snd_pcm v4l2_compat_ioctl32 17128 1 videodev hp_wmi 18092 0 serio_raw 13211 0 sparse_keymap 13890 1 hp_wmi mxm_wmi 13021 1 nouveau video 19651 1 nouveau wmi 19256 2 hp_wmi,mxm_wmi mac_hid 13253 0 lp 17799 0 parport 46562 3 parport_pc,ppdev,lp r8169 62190 0 ------------------------------------------------------------------- COMMAND sudo su 
modprobe -v ath9k OUTPUT ------------------------------------------------------------------- insmod /lib/modules/3.2.0-67-generic/kernel/net/wireless/cfg80211.ko insmod /lib/modules/3.2.0-67-generic/kernel/drivers/net/wireless/ath/ath.ko insmod /lib/modules/3.2.0-67-generic/kernel/drivers/net/wireless/ath/ath9k/ath9k_hw.ko insmod /lib/modules/3.2.0-67-generic/kernel/drivers/net/wireless/ath/ath9k/ath9k_common.ko insmod /lib/modules/3.2.0-67-generic/kernel/net/mac80211/mac80211.ko insmod /lib/modules/3.2.0-67-generic/kernel/drivers/net/wireless/ath/ath9k/ath9k.ko ------------------------------------------------------------------- COMMAND do-release-upgrade OUTPUT ------------------------------------------------------------------- Err Upgrade tool signature 404 Not Found [IP: 91.189.88.149 80] Err Upgrade tool 404 Not Found [IP: 91.189.88.149 80] Fetched 0 B in 0s (0 B/s) WARNING:root:file 'quantal.tar.gz.gpg' missing Failed to fetch Fetching the upgrade failed. There may be a network problem. ------------------------------------------------------------------- COMMAND sudo modprobe ath9k dmesg | grep ath9k NO OUTPUT FOR THEM COMMAND dmesg | grep -e ath -e 80211 OUTPUT ------------------------------------------------------------------- [ 13.232372] type=1400 audit(1408867538.399:9): apparmor="STATUS" operation="profile_load" name="/usr/lib/telepathy/mission-control-5" pid=975 comm="apparmor_parser" [ 13.232615] type=1400 audit(1408867538.399:10): apparmor="STATUS" operation="profile_load" name="/usr/lib/telepathy/telepathy-*" pid=975 comm="apparmor_parser" [ 15.186599] ath3k: probe of 3-4:1.0 failed with error -110 [ 15.186635] usbcore: registered new interface driver ath3k [ 88.219329] cfg80211: Calling CRDA to update world regulatory domain [ 88.351665] cfg80211: World regulatory domain updated: [ 88.351667] cfg80211: (start_freq - end_freq @ bandwidth), (max_antenna_gain, max_eirp) [ 88.351670] cfg80211: (2402000 KHz - 2472000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) [ 88.351671] cfg80211: (2457000 KHz - 2482000 KHz @ 20000 KHz), (300 mBi, 2000 mBm) [ 88.351673] cfg80211: (2474000 KHz - 2494000 KHz @ 20000 KHz), (300 mBi, 2000 mBm) [ 88.351674] cfg80211: (5170000 KHz - 5250000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) [ 88.351675] cfg80211: (5735000 KHz - 5835000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) ------------------------------------------------------------------- COMMAND sudo apt-get install touchpad-indicator OUTPUT ------------------------------------------------------------------- Reading package lists... Done Building dependency tree Reading state information... Done The following extra packages will be installed: gir1.2-gconf-2.0 python-pyudev Suggested packages: python-qt4 python-pyside.qtcore The following NEW packages will be installed: gir1.2-gconf-2.0 python-pyudev touchpad-indicator 0 upgraded, 3 newly installed, 0 to remove and 0 not upgraded. Need to get 84.1 kB of archives. After this operation, 1,136 kB of additional disk space will be used. Do you want to continue [Y/n]? Y Get:1 http://ppa.launchpad.net/atareao/atareao/ubuntu/ precise/main touchpad-indicator all 0.9.3.12-1ubuntu1 [46.5 kB] Get:2 http://archive.ubuntu.com/ubuntu/ precise/main gir1.2-gconf-2.0 amd64 3.2.5-0ubuntu2 [7,098 B] Get:3 http://archive.ubuntu.com/ubuntu/ precise/main python-pyudev all 0.13-1 [30.5 kB] Fetched 84.1 kB in 2s (31.6 kB/s) Selecting previously unselected package gir1.2-gconf-2.0. (Reading database ... 169322 files and directories currently installed.) 
Unpacking gir1.2-gconf-2.0 (from .../gir1.2-gconf-2.0_3.2.5-0ubuntu2_amd64.deb) ... Selecting previously unselected package python-pyudev. Unpacking python-pyudev (from .../python-pyudev_0.13-1_all.deb) ... Selecting previously unselected package touchpad-indicator. Unpacking touchpad-indicator (from .../touchpad-indicator_0.9.3.12-1ubuntu1_all.deb) ... Processing triggers for bamfdaemon ... Rebuilding /usr/share/applications/bamf.index... Processing triggers for desktop-file-utils ... Processing triggers for gnome-menus ... Processing triggers for hicolor-icon-theme ... Processing triggers for software-center ... INFO:softwarecenter.db.update:no translation information in database needed Setting up gir1.2-gconf-2.0 (3.2.5-0ubuntu2) ... Setting up python-pyudev (0.13-1) ... Setting up touchpad-indicator (0.9.3.12-1ubuntu1) ... ------------------------------------------------------------------- Not able to find ( drivers/net/wireless/ath/ath9k/hw.c ) or ( drivers/net/wireless/ath/ath9k/hw.h )

    Read the article

  • Screenshot Tour: Ubuntu Touch 14.04 on a Nexus 7

    - by Chris Hoffman
    Ubuntu 14.04 LTS will “form the basis of the first commercially available Ubuntu tablets,” according to Canonical. We installed Ubuntu Touch 14.04 on our own hardware to see what those tablets will be like. We don’t recommend installing this yourself, as it’s still not a polished, complete experience. We’re using “Ubuntu Touch” as shorthand here — apparently this project’s new name is “Ubuntu For Devices.” The Welcome Screen Ubuntu’s touch interface is all about edge swipes and hidden interface elements — it has a lot in common with Windows 8, actually. You’ll see the welcome screen when you boot up or unlock a Ubuntu tablet or phone. If you have new emails, text messages, or other information, it will appear on this screen along with the time and date. If you don’t, you’ll just see a message saying “No data sources available.” The Dash Swipe in from the right edge of the welcome screen to access the Dash, or home screen. This is actually very similar to the Dash on Ubuntu’s Unity desktop. This isn’t a surprise — Canonical wants the desktop and touch versions of Ubuntu to use the same code. In the future, the desktop and touch versions of Ubuntu will use the same version of Unity and Unity will adjust its interface depending on what type of device your’e using. Here you’ll find apps you have installed and apps available to install. Tap an installed app to launch it or tap an available app to view more details and install it. Tap the My apps or Available headings to view a complete list of apps you have installed or apps you can install. Tap the Search box at the top of the screen to start searching — this is how you’d search for new apps to install. As you’d expect, a touch keyboard appears when you tap in the Search field or any other text field. The launcher isn’t just for apps. Tap the Apps heading at the top of the screen and you’ll see hidden text appear — Music, Video, and Scopes. This hidden navigation is used throughout Ubuntu’s different apps and can be easy to miss at first. Swipe to the left or right to move between these screens. These screens are also similar to the different panels in Unity on the desktop. The Scopes section allows you to view different search scopes you have installed. These are used to search different sources when you start a search from the Dash. Search from the Music or Videos scopes to search for local media files on your device or media files online. For example, searching in the Music scope will show you music results from Grooveshark by default. Navigating Ubuntu Touch Swipe in from the left edge anywhere on the system to open the launcher, a bar with shortcuts to apps. This launcher is very similar to the launcher on the left of Ubuntu’s Unity desktop — that’s the whole idea, after all. Once you’ve opened an app, you can leave the app by swiping in from the left. The launcher will appear — keep moving your finger towards the right edge of teh screen. This will swipe the current app off the screen, taking you back to the Dash. Once back on the Dash, you’ll see your open apps represented as thumbnails under Recent. Tap a thumbnail here to go back to a running app. To remove an app from here, long-press it and tap the X button that appears. Swipe in from the right edge in any app to quickly switch between recent apps. Swipe in from the right edge and hold your finger down to reveal an application switcher that shows all your recent apps and lets you choose between them. Swipe down from the top of the screen to access the indicator panel. 
Here you can connect to Wi-Fi networks, view upcoming events, control GPS and Bluetooth hardware, adjust sound settings, see incoming messages, and more. This panel is for quick access to hardware settings and notifications, just like the indicators on Ubuntu’s Unity desktop. The Apps System settings not included in the pull-down panel are available in the System Settings app. To access it, tap My apps on the Dash and tap System Settings, search for the System Settings app, or open the launcher bar and tap the settings icon. The settings here a bit limited compared to other operating systems, but many of the important options are available here. You can add Evernote, Ubuntu One, Twitter, Facebook, and Google accounts from here. A free Ubuntu One account is mandatory for downloading and updating apps. A Google account can be used to sync contacts and calendar events. Some apps on Ubuntu are native apps, while many are web apps. For example, the Twitter, Gmail, Amazon, Facebook, and eBay apps included by default are all web apps that open each service’s mobile website as an app. Other applications, such as the Weather, Calendar, Dialer, Calculator, and Notes apps are native applications. Theoretically, both types of apps will be able to scale to different screen resolutions. Ubuntu Touch and Ubuntu desktop may one day share the same apps, which will adapt to different display sizes and input methods. Like Windows 8 apps, Ubuntu apps hide interface elements by default, providing you with a full-screen view of the content. Swipe up from the bottom of an app’s screen to view its interface elements. For example, swiping up from the bottom of the Web Browser app reveals Back, Forward, and Refresh buttons, along with an address bar and Activity button so you can view current and recent web pages. Swipe up even more from the bottom and you’ll see a button hovering in the middle of the app. Tap the button and you’ll see many more settings. This is an overflow area for application options and functions that can’t fit on the navigation bar. The Terminal app has a few surprising Easter eggs in this panel, including a “Hack into the NSA” option. Tap it and the following text will appear in the terminal: That’s not very nice, now tracing your location . . . . . . . . . . . .Trace failed You got away this time, but don’t try again. We’d expect to see such Easter eggs disappear before Ubuntu Touch actually ships on real devices. Ubuntu Touch has come a long way, but it’s still not something you want to use today. For example, it doesn’t even have a built-in email client — you’ll have to us your email service’s mobile website. Few apps are available, and many of the ones that are are just mobile websites. It’s not a polished operating system intended for normal users yet — it’s more of a preview for developers and device manufacturers. If you really want to try it yourself, you can install it on a Wi-Fi Nexus 7 (2013), Nexus 10, or Nexus 4 device. Follow Ubuntu’s installation instructions here.

    Read the article

  • Different Not Automatically Implies Better

    - by Alois Kraus
    Originally posted on: http://geekswithblogs.net/akraus1/archive/2013/11/05/154556.aspxRecently I was digging deeper why some WCF hosted workflow application did consume quite a lot of memory although it did basically only load a xaml workflow. The first tool of choice is Process Explorer or even better Process Hacker (has more options and the best feature copy&paste does work). The three most important numbers of a process with regards to memory are Working Set, Private Working Set and Private Bytes. Working set is the currently consumed physical memory (parts can be shared between processes e.g. loaded dlls which are read only) Private Working Set is the physical memory needed by this process which is not shareable Private Bytes is the number of non shareable which is only visible in the current process (e.g. all new, malloc, VirtualAlloc calls do create private bytes) When you have a bigger workflow it can consume under 64 bit easily 500MB for a 1-2 MB xaml file. This does not look very scalable. Under 64 bit the issue is excessive private bytes consumption and not the managed heap. The picture is quite different for 32 bit which looks a bit strange but it seems that the hosted VB compiler is a lot less memory hungry under 32 bit. I did try to repro the issue with a medium sized xaml file (400KB) which does contain 1000 variables and 1000 if which can be represented by C# code like this: string Var1; string Var2; ... string Var1000; if (!String.IsNullOrEmpty(Var1) ) { Console.WriteLine(“Var1”); } if (!String.IsNullOrEmpty(Var2) ) { Console.WriteLine(“Var2”); } ....   Since WF is based on VB.NET expressions you are bound to the hosted VB.NET compiler which does result in (x64) 140 MB of private bytes which is ca. 140 KB for each if clause which is quite a lot if you think about the actually present functionality. But there is hope. .NET 4.5 does allow now C# expressions for WF which is a major step forward for all C# lovers. I did create some simple patcher to “cross compile” my xaml to C# expressions. Lets look at the result: C# Expressions VB Expressions x86 x86 On my home machine I have only 32 bit which gives you quite exactly half of the memory consumption under 64 bit. C# expressions are 10 times more memory hungry than VB.NET expressions! I wanted to do more with less memory but instead it did consume a magnitude more memory. That is surprising to say the least. The workflow does initialize in about the same time under x64 and x86 where the VB code does it in 2s whereas the C# version needs 18s. Also nearly ten times slower. That is a too high price to pay for any bigger sized xaml workflow to convert from VB.NET to C# expressions. If I do reduce the number of expressions to 500 then it does need 400MB which is about half of the memory. It seems that the cost per if does rise linear with the number of total expressions in a xaml workflow.  Expression Language Cost per IF Startup Time C# 1000 Ifs x64 1,5 MB 18s C# 500 Ifs x64 750 KB 9s VB 1000 Ifs x64 140 KB 2s VB 500 Ifs x64 70 KB 1s Now we can directly compare two MS implementations. It is clear that the VB.NET compiler uses the same underlying structure but it has much higher offset compared to the highly inefficient C# expression compiler. I have filed a connect bug here with a harsher wording about recent advances in memory consumption. The funniest thing is that one MS employee did give an Azure AppFabric demo around early 2011 which was so slow that he needed to investigate with xperf. 
He was after startup time and the call stacks with regards to VB.NET expression compilation were remarkably similar. In fact I only found this post by googling for parts of my call stacks. … “C# expressions will be coming soon to WF, and that will have different performance characteristics than VB” … What did he know Jan 2011 what I did no know until today? ;-). He knew that C# expression will come but that they will not be automatically have better footprint. It is about time to fix that. In its current state C# expressions are not usable for bigger workflows. That also explains the headline for today. You can cheat startup time by prestarting workflows so that the demo looks nice and snappy but it does hurt scalability a lot since you do need much more memory than necessary. I did find the stacks by enabling virtual allocation tracking within XPerf which is still the best tool out there. But first you need to look at your process to check where the memory is hiding: For the C# Expression compiler you do not need xperf. You can directly dump the managed heap and check with a profiler of your choice. But if the allocations are happening on the Private Data ( VirtualAlloc ) you can find it with xperf. There is a nice video on channel 9 explaining VirtualAlloc tracking it in greater detail. If your data allocations are on the Heap it does mean that the C/C++ runtime did create a heap for you where all malloc, new calls do allocate from it. You can enable heap tracing with xperf and full call stack support as well which is doable via xperf like it is shown also on channel 9. Or you can use WPRUI directly: To make “Heap Usage” it work you need to set for your executable the tracing flags (before you start it). For example devenv.exe HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\devenv.exe DWORD TracingFlags 1 Do not forget to disable it after you did complete profiling the process or it will impact the startup time quite a lot. You can with xperf attach directly to a running process and collect heap allocation information from a gone wild process. Very handy if you need to find out what a process was doing which has arrived in a funny state. “VirtualAlloc usage” does work without explicitly enabling stuff for a specific process and is always on machine wide. I had issues on my Windows 7 machines with the call stack collection and the latest Windows 8.1 Performance Toolkit. I was told that WPA from Windows 8.0 should work fine but I do not want to downgrade.
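    As a side note on the TracingFlags registry value mentioned above, the flag can also be toggled from a small script instead of editing the registry by hand. The following is only a hedged sketch in Python (winreg), assuming it is run from an elevated prompt; devenv.exe is just the example executable from the text, and the heap-tracing behaviour tied to the flag is as described in the article, not verified here.

        import winreg

        IFEO = r"SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options"

        def set_heap_tracing(exe_name, enabled):
            key_path = IFEO + "\\" + exe_name
            with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, key_path, 0,
                                    winreg.KEY_SET_VALUE) as key:
                if enabled:
                    winreg.SetValueEx(key, "TracingFlags", 0, winreg.REG_DWORD, 1)
                else:
                    # remove the value after profiling, as the article warns,
                    # otherwise process start-up stays slowed down
                    try:
                        winreg.DeleteValue(key, "TracingFlags")
                    except FileNotFoundError:
                        pass

        set_heap_tracing("devenv.exe", True)    # before starting the process
        # ... capture a heap trace with xperf / WPRUI ...
        set_heap_tracing("devenv.exe", False)   # after profiling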

    Read the article

  • Optimizing a lot of Scanner.findWithinHorizon(pattern, 0) calls

    - by darvids0n
    I'm building a process which extracts data from 6 csv-style files and two poorly laid out .txt reports and builds output CSVs, and I'm fully aware that there's going to be some overhead searching through all that whitespace thousands of times, but I never anticipated converting about about 50,000 records would take 12 hours. Excerpt of my manual matching code (I know it's horrible that I use lists of tokens like that, but it was the best thing I could think of): public static String lookup(List<String> tokensBefore, List<String> tokensAfter) { String result = null; while(_match(tokensBefore)) { // block until all input is read if(id.hasNext()) { result = id.next(); // capture the next token that matches if(_matchImmediate(tokensAfter)) // try to match tokensAfter to this result return result; } else return null; // end of file; no match } return null; // no matches } private static boolean _match(List<String> tokens) { return _match(tokens, true); } private static boolean _match(List<String> tokens, boolean block) { if(tokens != null && !tokens.isEmpty()) { if(id.findWithinHorizon(tokens.get(0), 0) == null) return false; for(int i = 1; i <= tokens.size(); i++) { if (i == tokens.size()) { // matches all tokens return true; } else if(id.hasNext() && !id.next().matches(tokens.get(i))) { break; // break to blocking behaviour } } } else { return true; // empty list always matches } if(block) return _match(tokens); // loop until we find something or nothing else return false; // return after just one attempted match } private static boolean _matchImmediate(List<String> tokens) { if(tokens != null) { for(int i = 0; i <= tokens.size(); i++) { if (i == tokens.size()) { // matches all tokens return true; } else if(!id.hasNext() || !id.next().matches(tokens.get(i))) { return false; // doesn't match, or end of file } } return false; // we have some serious problems if this ever gets called } else { return true; // empty list always matches } } Basically wondering how I would work in an efficient string search (Boyer-Moore or similar). My Scanner id is scanning a java.util.String, figured buffering it to memory would reduce I/O since the search here is being performed thousands of times on a relatively small file. The performance increase compared to scanning a BufferedReader(FileReader(File)) was probably less than 1%, the process still looks to be taking a LONG time. I've also traced execution and the slowness of my overall conversion process is definitely between the first and last like of the lookup method. In fact, so much so that I ran a shortcut process to count the number of occurrences of various identifiers in the .csv-style files (I use 2 lookup methods, this is just one of them) and the process completed indexing approx 4 different identifiers for 50,000 records in less than a minute. Compared to 12 hours, that's instant. Some notes (updated): I don't necessarily need the pattern-matching behaviour, I only get the first field of a line of text so I need to match line breaks or use Scanner.nextLine(). All ID numbers I need start at position 0 of a line and run through til the first block of whitespace, after which is the name of the corresponding object. I would ideally want to return a String, not an int locating the line number or start position of the result, but if it's faster then it will still work just fine. 
If an int is being returned, however, then I would now have to seek to that line again just to get the ID; storing the ID of every line that is searched sounds like a way around that. Anything to help me out, even if it saves 1ms per search, will help, so all input is appreciated. Thankyou! Usage scenario 1: I have a list of objects in file A, who in the old-style system have an id number which is not in file A. It is, however, POSSIBLY in another csv-style file (file B) or possibly still in a .txt report (file C) which each also contain a bunch of other information which is not useful here, and so file B needs to be searched through for the object's full name (1 token since it would reside within the second column of any given line), and then the first column should be the ID number. If that doesn't work, we then have to split the search token by whitespace into separate tokens before doing a search of file C for those tokens as well. Generalised code: String field; for (/* each record in file A */) { /* construct the rest of this object from file A info */ // now to find the ID, if we can List<String> objectName = new ArrayList<String>(1); objectName.add(Pattern.quote(thisObject.fullName)); field = lookup(objectSearchToken, objectName); // search file B if(field == null) // not found in file B { lookupReset(false); // initialise scanner to check file C objectName.clear(); // not using the full name String[] tokens = thisObject.fullName.split(id.delimiter().pattern()); for(String s : tokens) objectName.add(Pattern.quote(s)); field = lookup(objectSearchToken, objectName); // search file C lookupReset(true); // back to file B } else { /* found it, file B specific processing here */ } if(field != null) // found it in B or C thisObject.ID = field; } The objectName tokens are all uppercase words with possible hyphens or apostrophes in them, separated by spaces. Much like a person's name. As per a comment, I will pre-compile the regex for my objectSearchToken, which is just [\r\n]+. What's ending up happening in file C is, every single line is being checked, even the 95% of lines which don't contain an ID number and object name at the start. Would it be quicker to use ^[\r\n]+.*(objectname) instead of two separate regexes? It may reduce the number of _match executions. The more general case of that would be, concatenate all tokensBefore with all tokensAfter, and put a .* in the middle. It would need to be matching backwards through the file though, otherwise it would match the correct line but with a huge .* block in the middle with lots of lines. The above situation could be resolved if I could get java.util.Scanner to return the token previous to the current one after a call to findWithinHorizon. I have another usage scenario. Will put it up asap.
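    To make the "precompiled, line-anchored pattern" idea discussed above a bit more tangible, here is a small sketch (in Python rather than Java, purely for illustration) of indexing the report in a single pass: the ID is everything before the first whitespace of a line, the object name is the rest of the line, and later lookups become dictionary hits instead of repeated findWithinHorizon scans. The sample report contents and names are made up.

        import re

        def build_index(text):
            # One pass over the report: capture "<id> <full name>" per line
            index = {}
            pattern = re.compile(r"^(\S+)\s+(.+?)\s*$", re.MULTILINE)
            for match in pattern.finditer(text):
                index[match.group(2)] = match.group(1)
            return index

        def lookup_id(index, full_name):
            return index.get(full_name)   # None when the name is not present

        report = "A123-9 O'BRIEN SMITH\nB778-2 JONES-CARTER MARY\nC001-4 DOE JOHN\n"
        index = build_index(report)
        print(lookup_id(index, "JONES-CARTER MARY"))   # -> B778-2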

    Read the article

  • How to remove a node based on contents of a subnode and then convert to CSV with headers intact

    - by Morris Cox
    I'm downloading a 10MB zipped XML file with wget, unzipping it to 40MB, trying to weed out test and expired entries (if the text of a certain node is "This is a test opportunity. Please DO NOT apply!" or if is in the past), and then convert to CSV. However, I get this error: PHP Notice: Array to string conversion in /home/morris/projects/grantsgov/xml2csv.php on line 46 I get Array errors because some entries in the XML file have more than one occurrence of a node (different text contents). The test entries are still present. Contents of grantsgov.sh: #!/bin/bash wget --clobber "http://www.grants.gov/search/downloadXML.do;jsessionid=1n7GNpNF2tKZnRGLyQqf7Tl32hFJ1zndhfQpLrJJD11TTNzWMwDy!368676377?fname=GrantsDBExtract$(date +'%Y%m%d').zip" -O GrantsDBExtract$(date +"%Y%m%d").zip unzip GrantsDBExtract$(date +"%Y%m%d").zip php xml2csv.php zip GrantsDBExtracted$(date +"%Y%m%d").zip GrantsDBExtracted$(date +"%Y%m%d").csv Contents of xml2csv.php: <? $current_date = date("Ymd"); //$getfile=fopen("http://www.grants.gov/search/downloadXML.do;jsessionid=GqJNNmdLyJyMlsQqTzS2KdzgT5NMdhPp0QhG946JTmHzRltNTpMQ!368676377?fname=GrantsDBExtract$current_date.zip", "r"); $zip = zip_open("GrantsDBExtract$current_date.zip"); if(is_resource($zip)) { while ($zip_entry = zip_read($zip)) { $fp = fopen("./".zip_entry_name($zip_entry), "w"); if (zip_entry_open($zip, $zip_entry, "r")) { $buf = zip_entry_read($zip_entry, zip_entry_filesize($zip_entry)); fwrite($fp,"$buf"); zip_entry_close($zip_entry); fclose($fp); } } zip_close($zip); } // For future potential use $xml = new DOMDocument('1.0', 'ascii'); $xpath = new DOMXpath($xml); $xslFile = "feddata.xsl"; $filexml="GrantsDBExtract$current_date.xml"; if (file_exists($filexml)) { $xml = simplexml_load_file($filexml); echo "Loaded $filexml\n"; $xslt = new XSLTProcessor(); $xsl = new DOMDocument(); //$XSL->load('feddata.xsl', LIBXML_NOCDATA); $xsl->load($xslFile, LIBXML_NOCDATA); $xslt->importStylesheet($xsl); $xslt->transformToXML($xml); $f = fopen("GrantsDBExtracted$current_date.csv", 'w'); // create the CSV header row on the first time here $first = FALSE; $fields = array(); foreach($record as $key => $value) { $fields[] = $key; } fwrite($f,implode(";",$fields)."\n"); foreach ($xml->FundingOppSynopsis as $fos) { fputcsv($f, get_object_vars($fos),',','"'); } } fclose($f); } else { exit('Failed to open file.'); } ?> Contents of feddata.xsl: <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"> <xsl:output method="xml"/> <xsl:template match="node()|@*"> <xsl:copy> <xsl:apply-templates select="node()|@*"/> </xsl:copy> </xsl:template> <xsl:template match="FundingOppSynopsis[AgencyMailingAddress = 'This is a test opportunity. Please DO NOT apply!']"> </xsl:template> </xsl:stylesheet><xsl:strip-space elements="*"/> Part of the XML file: <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE Grants SYSTEM "http://www.grants.gov/search/dtd/XMLExtract.dtd"> <Grants> <FundingOppSynopsis> <PostDate>08312007</PostDate> <UserID>None</UserID> <Password>None</Password> <FundingInstrumentType>CA</FundingInstrumentType> <FundingActivityCategory>DPR</FundingActivityCategory> <OtherCategoryExplanation>This is a test opportunity. Please DO NOT apply!</OtherCategoryExplanation> <NumberOfAwards>5</NumberOfAwards> <EstimatedFunding>4</EstimatedFunding> <AwardCeiling>2</AwardCeiling> <AwardFloor>1</AwardFloor> <AgencyMailingAddress>This is a test opportunity. Please DO NOT apply!</AgencyMailingAddress> <FundingOppTitle>This is a test opportunity. 
Please DO NOT apply!</FundingOppTitle> <FundingOppNumber>IVV-08312007-RG-OPP5</FundingOppNumber> <ApplicationsDueDate>09102007</ApplicationsDueDate> <ApplicationsDueDateExplanation>This is a test opportunity. Please DO NOT apply!</ApplicationsDueDateExplanation> <ArchiveDate>10102007</ArchiveDate> <Location>None</Location> <Office>None</Office> <Agency>None</Agency> <FundingOppDescription>This is a test opportunity. Please DO NOT apply!</FundingOppDescription> <CFDANumber>000000</CFDANumber> <EligibilityCategory>21</EligibilityCategory> <AdditionalEligibilityInfo>This is a test opportunity. Please DO NOT apply!</AdditionalEligibilityInfo> <CostSharing>N</CostSharing> <ObtainFundingOppText FundingOppURL="">Not Available</ObtainFundingOppText> <AgencyContact AgencyEmailDescriptor="This is a test opportunity. Please DO NOT apply!" AgencyEmailAddress="This is a test opportunity. Please DO NOT apply!">This is a test opportunity. Please DO NOT apply!</AgencyContact> </FundingOppSynopsis> How can I fix the error and remove expired entries (based on ArchiveDate)? I suspect the second to last line in feddata.xsl needs to be fixed.

    Read the article

  • Will these optimizations to my Ruby implementation of diff improve performance in a Rails app?

    - by grg-n-sox
    <tl;dr> In source version control diff patch generation, would it be worth it to use the optimizations listed at the very bottom of this post (see <optimizations>) in my Ruby implementation of diff for making diff patches? </tl;dr>

    <introduction> I am programming something I have never done before, and there might already be tools out there that do exactly this, but at this point I am having too much fun to care, so I am going to do it from scratch even if such a tool exists. Anyway, I am working on a Ruby on Rails app and need a certain feature. Basically, I want each entry in a table of mine, say a table of video games, to have a stored chunk of text that represents a review or something of the sort for that entry. However, I want this text to be editable by any registered user and also to keep track of different submissions in a version control system. The simplest solution I could think of is to keep track of the text body and the diff patch history of its different versions as objects in Ruby and then serialize them, preferably in human-readable form (so I'll most likely use YAML), so they can be edited by hand if needed, for example after corruption by a software bug or a mistake made by an admin doing some version editing.

    When I dove head first into this feature, I found that generating a diff patch efficiently is more difficult than I thought. So I did some research and came across some ideas; some I have implemented already and some I have not. It all pretty much revolves around the longest common subsequence problem, as you would already know if you have done anything with diff or diff-like features, and around optimizing the function that solves it. Currently I truncate the compared versions of the text body from the beginning and end until non-matching lines are found. Then I solve the problem using a comparison matrix, but instead of incrementing the value stored in a cell when a matching line is found, as most longest common subsequence examples do, I increment on a non-matching line so as to calculate edit distance instead of longest common subsequence. As far as I can tell, the two approaches are essentially two sides of the same coin, so either could be used to derive an answer. It then back-traces through the comparison matrix, noting where there was an incrementation and in which adjacent cell (West, Northwest, or North), to determine that line's diff entry, and assumes all other lines are unchanged.

    Normally I would leave it at that, but since this is going into a Rails environment and not just a stand-alone Ruby script, I started worrying about optimizing at least enough that a spammer who somehow knew how I implemented the version control system, and knew my worst-case entry, still couldn't hit the server that hard. After searching and reading research papers and articles, I've come across several optimizations that seem decent, but all have pros and cons, and I am having a hard time deciding how the pros and cons balance out in this situation. So are the ones listed here worth it? I have listed them below with their known pros and cons. </introduction>

    <optimizations>
    1. Chop the compared sequences into multiple chunks of subsequences by splitting where lines are unchanged, then truncate the unchanged lines at the beginning and end of each section, and solve the edit distance of each subsequence. Pro: changes the time growth as the changed area gets bigger from quadratic to something closer to linear. Con: figuring out where to split already seems to require solving edit distance, except now you don't care how it changed. It would be fine if this were solvable by something closer to Hamming distance, but a single insertion throws that off.
    2. Use a cryptographic hash function to convert all sequence elements into integers and ensure uniqueness, then solve the edit distance by comparing the hash integers instead of the sequence elements themselves. Pro: comparing two integers is faster than comparing two strings, so a slight performance gain is received on every comparison, which adds up. Con: hashing all the sequence elements takes time and may cost more than the integer comparisons win back. You could use the built-in hash function for a string, but that will not guarantee uniqueness.
    3. Use lazy evaluation to calculate only the three center-most diagonals of the comparison matrix, calculating additional diagonals only as needed, and also use this approach to possibly remove the need, on some comparisons, to compare all three adjacent cells, as described here. Pro: can turn an algorithm that always takes O(n * m) time into one where that is only the worst case, the best case becomes practically linear, and the average case sits somewhere between the two. Con: I've only seen it implemented in functional programming languages, and I am having a difficult time working out how to convert it into Ruby from the description at the site linked above.
    4. Make a C module that does the hard work at the native level, with a Ruby wrapper so Ruby can make all the calls to it that it needs. Pro: I have to imagine that evaluating something like this in C could be a LOT faster. Con: I have no idea how Rails handles apps with Ruby code that has C extensions, and it hurts the portability of the app.
    5. This one is for after edit distance is solved: store additional combined diffs alongside the ones produced by each version, forming a delta-tree data structure with the most recently made diff as the root node, so getting to any version takes worst-case O(log n) time instead of O(n). Pro: would make going back to an old version a lot faster. Con: every new commit would give the delta-tree a new root node and cost time reorganizing the tree, for an operation that happens far more often than going back a version, let alone to an old one.
    </optimizations>

    So are these things worth the effort?
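
    The truncation step the question already performs, followed by a plain edit-distance fill over whatever remains, is language-agnostic, so a compact sketch may help in weighing the first optimization. It is written in C++ purely for illustration, since the original Ruby is not shown; trimmed_edit_distance, its signature, and the use of whole lines as the compared elements are assumptions, and the back-tracing that produces the actual diff entries is omitted.

        #include <algorithm>
        #include <cstddef>
        #include <string>
        #include <vector>

        // Sketch: trim the common prefix and suffix of two line sequences, then run
        // a classic O(n * m) edit-distance fill on the remaining middle sections only.
        std::size_t trimmed_edit_distance(const std::vector<std::string>& a,
                                          const std::vector<std::string>& b) {
            std::size_t lo = 0;                              // length of common prefix
            while (lo < a.size() && lo < b.size() && a[lo] == b[lo]) ++lo;
            std::size_t hi = 0;                              // length of common suffix
            while (hi + lo < a.size() && hi + lo < b.size() &&
                   a[a.size() - 1 - hi] == b[b.size() - 1 - hi]) ++hi;

            const std::size_t n = a.size() - lo - hi;        // unmatched middle of a
            const std::size_t m = b.size() - lo - hi;        // unmatched middle of b

            // Standard Levenshtein matrix over the middle sections.
            std::vector<std::vector<std::size_t>> d(n + 1, std::vector<std::size_t>(m + 1));
            for (std::size_t i = 0; i <= n; ++i) d[i][0] = i;
            for (std::size_t j = 0; j <= m; ++j) d[0][j] = j;
            for (std::size_t i = 1; i <= n; ++i) {
                for (std::size_t j = 1; j <= m; ++j) {
                    const std::size_t cost = (a[lo + i - 1] == b[lo + j - 1]) ? 0 : 1;
                    d[i][j] = std::min({d[i - 1][j] + 1,          // deletion    (North)
                                        d[i][j - 1] + 1,          // insertion   (West)
                                        d[i - 1][j - 1] + cost}); // change/keep (Northwest)
                }
            }
            return d[n][m];                                  // back-tracing omitted
        }

    Trimming first does not improve the worst case, but when the edits are confined to one region it shrinks n and m considerably before the quadratic fill runs.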

    Read the article

  • Why can't I assign a scalar value to a class using shorthand, but instead have to declare it first, then set it?

    - by ~delan-azabani
    I am writing a UTF-8 library for C++ as an exercise as this is my first real-world C++ code. So far, I've implemented concatenation, character indexing, parsing and encoding UTF-8 in a class called "ustring". It looks like it's working, but two (seemingly equivalent) ways of declaring a new ustring behave differently. The first way: ustring a; a = "test"; works, and the overloaded "=" operator parses the string into the class (which stores the Unicode strings as an dynamically allocated int pointer). However, the following does not work: ustring a = "test"; because I get the following error: test.cpp:4: error: conversion from ‘const char [5]’ to non-scalar type ‘ustring’ requested Is there a way to workaround this error? It probably is a problem with my code, though. The following is what I've written so far for the library: #include <cstdlib> #include <cstring> class ustring { int * values; long len; public: long length() { return len; } ustring * operator=(ustring input) { len = input.len; values = (int *) malloc(sizeof(int) * len); for (long i = 0; i < len; i++) values[i] = input.values[i]; return this; } ustring * operator=(char input[]) { len = sizeof(input); values = (int *) malloc(0); long s = 0; // s = number of parsed chars int a, b, c, d, contNeed = 0, cont = 0; for (long i = 0; i < sizeof(input); i++) if (input[i] < 0x80) { // ASCII, direct copy (00-7f) values = (int *) realloc(values, sizeof(int) * ++s); values[s - 1] = input[i]; } else if (input[i] < 0xc0) { // this is a continuation (80-bf) if (cont == contNeed) { // no need for continuation, use U+fffd values = (int *) realloc(values, sizeof(int) * ++s); values[s - 1] = 0xfffd; } cont = cont + 1; values[s - 1] = values[s - 1] | ((input[i] & 0x3f) << ((contNeed - cont) * 6)); if (cont == contNeed) cont = contNeed = 0; } else if (input[i] < 0xc2) { // invalid byte, use U+fffd (c0-c1) values = (int *) realloc(values, sizeof(int) * ++s); values[s - 1] = 0xfffd; } else if (input[i] < 0xe0) { // start of 2-byte sequence (c2-df) contNeed = 1; values = (int *) realloc(values, sizeof(int) * ++s); values[s - 1] = (input[i] & 0x1f) << 6; } else if (input[i] < 0xf0) { // start of 3-byte sequence (e0-ef) contNeed = 2; values = (int *) realloc(values, sizeof(int) * ++s); values[s - 1] = (input[i] & 0x0f) << 12; } else if (input[i] < 0xf5) { // start of 4-byte sequence (f0-f4) contNeed = 3; values = (int *) realloc(values, sizeof(int) * ++s); values[s - 1] = (input[i] & 0x07) << 18; } else { // restricted or invalid (f5-ff) values = (int *) realloc(values, sizeof(int) * ++s); values[s - 1] = 0xfffd; } return this; } ustring operator+(ustring input) { ustring result; result.len = len + input.len; result.values = (int *) malloc(sizeof(int) * result.len); for (long i = 0; i < len; i++) result.values[i] = values[i]; for (long i = 0; i < input.len; i++) result.values[i + len] = input.values[i]; return result; } ustring operator[](long index) { ustring result; result.len = 1; result.values = (int *) malloc(sizeof(int)); result.values[0] = values[index]; return result; } char * encode() { char * r = (char *) malloc(0); long s = 0; for (long i = 0; i < len; i++) { if (values[i] < 0x80) r = (char *) realloc(r, s + 1), r[s + 0] = char(values[i]), s += 1; else if (values[i] < 0x800) r = (char *) realloc(r, s + 2), r[s + 0] = char(values[i] >> 6 | 0x60), r[s + 1] = char(values[i] & 0x3f | 0x80), s += 2; else if (values[i] < 0x10000) r = (char *) realloc(r, s + 3), r[s + 0] = char(values[i] >> 12 | 0xe0), r[s + 1] = char(values[i] >> 6 & 0x3f | 
0x80), r[s + 2] = char(values[i] & 0x3f | 0x80), s += 3; else r = (char *) realloc(r, s + 4), r[s + 0] = char(values[i] >> 18 | 0xf0), r[s + 1] = char(values[i] >> 12 & 0x3f | 0x80), r[s + 2] = char(values[i] >> 6 & 0x3f | 0x80), r[s + 3] = char(values[i] & 0x3f | 0x80), s += 4; } return r; } };
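
    The error is about initialization rather than assignment: ustring a = "test"; is copy-initialization, so the compiler looks for a constructor that accepts const char*, and the overloaded operator= is never considered. The usual workaround is a converting constructor that reuses the same parsing. The sketch below only shows the shape of that fix, not the poster's class: the UTF-8 decoding loop is elided, a std::vector<int> stands in for the int* / len pair, and parse() is a hypothetical helper.

        #include <vector>

        // Minimal sketch: a converting constructor is what makes "ustring a = "test";" legal.
        // The real UTF-8 decode loop from the question would live inside parse().
        class ustring {
            std::vector<int> values;                      // stands in for int* values + long len
        public:
            ustring() {}
            ustring(const char* input) { parse(input); }  // used by: ustring a = "test";
            ustring& operator=(const char* input) {       // used by: a = "test"; after declaration
                values.clear();
                parse(input);
                return *this;                             // assignment conventionally returns *this
            }
            long length() const { return static_cast<long>(values.size()); }
        private:
            void parse(const char* input) {               // placeholder for the decoder
                for (const char* p = input; *p; ++p) values.push_back(*p);
            }
        };

        int main() {
            ustring a = "test";       // copy-initialization now finds ustring(const char*)
            return a.length() == 4 ? 0 : 1;
        }

    Because std::vector manages its own storage, this toy also sidesteps the malloc/realloc bookkeeping; with the raw int* version from the question, the class would additionally want a copy constructor, copy assignment, and destructor (the rule of three).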

    Read the article

  • Need Help Customizing a Grammar Checking Replace Rule in Java

    - by user567785
    Hello, I am currently adding the Khmer (Cambodian) language to LanguageTool, an opensource grammar checker for OpenOffice (http://www.languagetool.org). I don't know enough Java to customize one of the scripts and wanted to make a request here asking if anyone would be willing to customize it for me (I can put link to your website at http://www.sbbic.org/lang/en-us/volunteer/ if you help). Here is the script that needs customization KhmerWordCoherencyRule.java: /* LanguageTool, a natural language style checker * Copyright (C) 2005 Daniel Naber (http://www.danielnaber.de) * * This library is free software; you can redistribute it and/or * modify it under the terms of the GNU Lesser General Public * License as published by the Free Software Foundation; either * version 2.1 of the License, or (at your option) any later version. * * This library is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU * Lesser General Public License for more details. * * You should have received a copy of the GNU Lesser General Public * License along with this library; if not, write to the Free Software * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 * USA */ package de.danielnaber.languagetool.rules.km; import java.io.BufferedReader; import java.io.IOException; import java.io.InputStream; import java.io.InputStreamReader; import java.util.ArrayList; import java.util.HashMap; import java.util.List; import java.util.Locale; import java.util.Map; import java.util.ResourceBundle; import de.danielnaber.languagetool.AnalyzedSentence; import de.danielnaber.languagetool.AnalyzedToken; import de.danielnaber.languagetool.AnalyzedTokenReadings; import de.danielnaber.languagetool.JLanguageTool; import de.danielnaber.languagetool.tools.StringTools; import de.danielnaber.languagetool.rules.Category; import de.danielnaber.languagetool.rules.RuleMatch; /** * A Khmer rule that matches words or phrases which should not be used and suggests * correct ones instead. Loads the relevant words from * <code>rules/km/coherency.txt</code>, where km is a code of the language. * * @author Andriy Rysin */ public abstract class KhmerWordCoherencyRule extends KhmerRule { private static final String FILE_ENCODING = "utf-8"; private Map<String, String> wrongWords; // e.g. "????? -> "?????" private static final String FILE_NAME = "/km/coherency.txt"; public abstract String getFileName(); public String getEncoding() { return FILE_ENCODING; } /** * Indicates if the rule is case-sensitive. Default value is <code>true</code>. * @return true if the rule is case-sensitive, false otherwise. */ //in Khmer there is no case public boolean isCaseSensitive() { return false; } /** * @return the locale used for case conversion when {@link #isCaseSensitive()} is set to <code>false</code>. 
*/ public Locale getLocale() { return Locale.getDefault(); } public KhmerWordCoherencyRule(final ResourceBundle messages) throws IOException { if (messages != null) { super.setCategory(new Category(messages.getString("category_misc"))); } wrongWords = loadWords(JLanguageTool.getDataBroker().getFromRulesDirAsStream(getFileName())); } public String getId() { return "KM_WORD_COHERENCY"; } public String getDescription() { return "Checks for wrong words/phrases"; } public String getSuggestion() { return " does not match your previous spelling of the word, use "; } public String getShort() { return "Use a consistant spelling throughout"; } public final RuleMatch[] match(final AnalyzedSentence text) { final List<RuleMatch> ruleMatches = new ArrayList<RuleMatch>(); final AnalyzedTokenReadings[] tokens = text.getTokensWithoutWhitespace(); for (int i = 1; i < tokens.length; i++) { final String token = tokens[i].getToken(); final String origToken = token; final String replacement = isCaseSensitive()?wrongWords.get(token):wrongWords.get(token.toLowerCase(getLocale())); if (replacement != null) { final String msg = token + getSuggestion() + replacement; final int pos = tokens[i].getStartPos(); final RuleMatch potentialRuleMatch = new RuleMatch(this, pos, pos + origToken.length(), msg, getShort()); if (!isCaseSensitive() && StringTools.startsWithUppercase(token)) { potentialRuleMatch.setSuggestedReplacement(StringTools.uppercaseFirstChar(replacement)); } else { potentialRuleMatch.setSuggestedReplacement(replacement); } ruleMatches.add(potentialRuleMatch); } } return toRuleMatchArray(ruleMatches); } private Map<String, String> loadWords(final InputStream file) throws IOException { final Map<String, String> map = new HashMap<String, String>(); InputStreamReader isr = null; BufferedReader br = null; try { isr = new InputStreamReader(file, getEncoding()); br = new BufferedReader(isr); String line; while ((line = br.readLine()) != null) { line = line.trim(); if (line.length() < 1) { continue; } if (line.charAt(0) == '#') { // ignore comments continue; } final String[] parts = line.split(";"); if (parts.length != 2) { throw new IOException("Format error in file " + JLanguageTool.getDataBroker().getFromRulesDirAsUrl(getFileName()) + ", line: " + line); } map.put(parts[0], parts[1]); } } finally { if (br != null) { br.close(); } if (isr != null) { isr.close(); } } return map; } public void reset() { } } Here is what I need the SimpleReplaceRule.java to do: 1 - Be able to have more than two spelling variations in the coherency.txt file (right now it can only be Word1;Word2). 2 - Find the first use of ANY of the spelling variations in a document that are found in coherency.txt and then make sure only that spelling is used throughout the document (ex. in the coherency.txt I have Word1;Word2;Word3 then in my document on the first line I write Word2. then on next line I write Word1 and Word 3 - then the grammar checker will flag Word1 and Word3 saying that I should use the spelling "Word2" instead...etc.). If anyone can help I would be grateful! Thanks for your time, Nathan

    Read the article

  • Higher order function « filter » in C++

    - by Red Hyena
    Hi all. I wanted to write a higher order function filter with C++. The code I have come up with so far is as follows: #include <iostream> #include <string> #include <functional> #include <algorithm> #include <vector> #include <list> #include <iterator> using namespace std; bool isOdd(int const i) { return i % 2 != 0; } template < template <class, class> class Container, class Predicate, class Allocator, class A > Container<A, Allocator> filter(Container<A, Allocator> const & container, Predicate const & pred) { Container<A, Allocator> filtered(container); container.erase(remove_if(filtered.begin(), filtered.end(), pred), filtered.end()); return filtered; } int main() { int const a[] = {23, 12, 78, 21, 97, 64}; vector<int const> const v(a, a + 6); vector<int const> const filtered = filter(v, isOdd); copy(filtered.begin(), filtered.end(), ostream_iterator<int const>(cout, " ")); } However on compiling this code, I get the following error messages that I am unable to understand and hence get rid of: /usr/include/c++/4.3/ext/new_allocator.h: In instantiation of ‘__gnu_cxx::new_allocator<const int>’: /usr/include/c++/4.3/bits/allocator.h:84: instantiated from ‘std::allocator<const int>’ /usr/include/c++/4.3/bits/stl_vector.h:75: instantiated from ‘std::_Vector_base<const int, std::allocator<const int> >’ /usr/include/c++/4.3/bits/stl_vector.h:176: instantiated from ‘std::vector<const int, std::allocator<const int> >’ Filter.cpp:29: instantiated from here /usr/include/c++/4.3/ext/new_allocator.h:82: error: ‘const _Tp* __gnu_cxx::new_allocator<_Tp>::address(const _Tp&) const [with _Tp = const int]’ cannot be overloaded /usr/include/c++/4.3/ext/new_allocator.h:79: error: with ‘_Tp* __gnu_cxx::new_allocator<_Tp>::address(_Tp&) const [with _Tp = const int]’ Filter.cpp: In function ‘Container<A, Allocator> filter(const Container<A, Allocator>&, const Predicate&) [with Container = std::vector, Predicate = bool ()(int), Allocator = std::allocator<const int>, A = const int]’: Filter.cpp:30: instantiated from here Filter.cpp:23: error: passing ‘const std::vector<const int, std::allocator<const int> >’ as ‘this’ argument of ‘__gnu_cxx::__normal_iterator<typename std::_Vector_base<_Tp, _Alloc>::_Tp_alloc_type::pointer, std::vector<_Tp, _Alloc> > std::vector<_Tp, _Alloc>::erase(__gnu_cxx::__normal_iterator<typename std::_Vector_base<_Tp, _Alloc>::_Tp_alloc_type::pointer, std::vector<_Tp, _Alloc> >, __gnu_cxx::__normal_iterator<typename std::_Vector_base<_Tp, _Alloc>::_Tp_alloc_type::pointer, std::vector<_Tp, _Alloc> >) [with _Tp = const int, _Alloc = std::allocator<const int>]’ discards qualifiers /usr/include/c++/4.3/bits/stl_algo.h: In function ‘_FIter std::remove_if(_FIter, _FIter, _Predicate) [with _FIter = __gnu_cxx::__normal_iterator<const int*, std::vector<const int, std::allocator<const int> > >, _Predicate = bool (*)(int)]’: Filter.cpp:23: instantiated from ‘Container<A, Allocator> filter(const Container<A, Allocator>&, const Predicate&) [with Container = std::vector, Predicate = bool ()(int), Allocator = std::allocator<const int>, A = const int]’ Filter.cpp:30: instantiated from here /usr/include/c++/4.3/bits/stl_algo.h:821: error: assignment of read-only location ‘__result.__gnu_cxx::__normal_iterator<_Iterator, _Container>::operator* [with _Iterator = const int*, _Container = std::vector<const int, std::allocator<const int> >]()’ /usr/include/c++/4.3/ext/new_allocator.h: In member function ‘void __gnu_cxx::new_allocator<_Tp>::deallocate(_Tp*, size_t) [with _Tp = const int]’: 
/usr/include/c++/4.3/bits/stl_vector.h:150: instantiated from ‘void std::_Vector_base<_Tp, _Alloc>::_M_deallocate(_Tp*, size_t) [with _Tp = const int, _Alloc = std::allocator<const int>]’ /usr/include/c++/4.3/bits/stl_vector.h:136: instantiated from ‘std::_Vector_base<_Tp, _Alloc>::~_Vector_base() [with _Tp = const int, _Alloc = std::allocator<const int>]’ /usr/include/c++/4.3/bits/stl_vector.h:286: instantiated from ‘std::vector<_Tp, _Alloc>::vector(_InputIterator, _InputIterator, const _Alloc&) [with _InputIterator = const int*, _Tp = const int, _Alloc = std::allocator<const int>]’ Filter.cpp:29: instantiated from here /usr/include/c++/4.3/ext/new_allocator.h:98: error: invalid conversion from ‘const void*’ to ‘void*’ /usr/include/c++/4.3/ext/new_allocator.h:98: error: initializing argument 1 of ‘void operator delete(void*)’ /usr/include/c++/4.3/bits/stl_algobase.h: In function ‘_OI std::__copy_move_a(_II, _II, _OI) [with bool _IsMove = false, _II = const int*, _OI = const int*]’: /usr/include/c++/4.3/bits/stl_algobase.h:435: instantiated from ‘_OI std::__copy_move_a2(_II, _II, _OI) [with bool _IsMove = false, _II = __gnu_cxx::__normal_iterator<const int*, std::vector<const int, std::allocator<const int> > >, _OI = __gnu_cxx::__normal_iterator<const int*, std::vector<const int, std::allocator<const int> > >]’ /usr/include/c++/4.3/bits/stl_algobase.h:466: instantiated from ‘_OI std::copy(_II, _II, _OI) [with _II = __gnu_cxx::__normal_iterator<const int*, std::vector<const int, std::allocator<const int> > >, _OI = __gnu_cxx::__normal_iterator<const int*, std::vector<const int, std::allocator<const int> > >]’ /usr/include/c++/4.3/bits/vector.tcc:136: instantiated from ‘__gnu_cxx::__normal_iterator<typename std::_Vector_base<_Tp, _Alloc>::_Tp_alloc_type::pointer, std::vector<_Tp, _Alloc> > std::vector<_Tp, _Alloc>::erase(__gnu_cxx::__normal_iterator<typename std::_Vector_base<_Tp, _Alloc>::_Tp_alloc_type::pointer, std::vector<_Tp, _Alloc> >, __gnu_cxx::__normal_iterator<typename std::_Vector_base<_Tp, _Alloc>::_Tp_alloc_type::pointer, std::vector<_Tp, _Alloc> >) [with _Tp = const int, _Alloc = std::allocator<const int>]’ Filter.cpp:23: instantiated from ‘Container<A, Allocator> filter(const Container<A, Allocator>&, const Predicate&) [with Container = std::vector, Predicate = bool ()(int), Allocator = std::allocator<const int>, A = const int]’ Filter.cpp:30: instantiated from here /usr/include/c++/4.3/bits/stl_algobase.h:396: error: no matching function for call to ‘std::__copy_move<false, true, std::random_access_iterator_tag>::__copy_m(const int*&, const int*&, const int*&)’ Please tell me what I am doing wrong here and what is the correct way to achieve the kind of higher order polymorphism I want. Thanks.
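
    Two separate things trip this up: std::vector<const int> is not a valid instantiation (std::allocator requires a non-const value type, which is where the new_allocator overload errors come from), and the template body erases from container, the const parameter, rather than from the local copy filtered. One possible reshaping is sketched below, under a couple of assumptions: the container is taken as a single template parameter, which sidesteps the allocator plumbing of the template-template version, and the original predicate semantics are kept, so remove_if drops the elements for which isOdd is true rather than keeping them.

        #include <algorithm>
        #include <iostream>
        #include <iterator>
        #include <vector>

        bool isOdd(int i) { return i % 2 != 0; }

        // Sketch: copy the container, run the erase-remove idiom on the copy, return it.
        // The element type must be non-const both for the allocator and for remove_if.
        template <class Container, class Predicate>
        Container filter(const Container& container, Predicate pred) {
            Container filtered(container);
            filtered.erase(std::remove_if(filtered.begin(), filtered.end(), pred),
                           filtered.end());
            return filtered;
        }

        int main() {
            const int a[] = {23, 12, 78, 21, 97, 64};
            const std::vector<int> v(a, a + 6);             // vector<int>, not vector<const int>
            const std::vector<int> kept = filter(v, isOdd); // drops the odd values: 12 78 64
            std::copy(kept.begin(), kept.end(), std::ostream_iterator<int>(std::cout, " "));
            std::cout << '\n';
        }

    If the intent was to keep the odd numbers instead, negating the predicate inside filter (with a small wrapper or a lambda) flips the behavior.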

    Read the article

  • Assignment operator that calls a constructor is broken

    - by Delan Azabani
    I've implemented some of the changes suggested in this question, and (thanks very much) it works quite well, however... in the process I've seemed to break the post-declaration assignment operator. With the following code: #include <cstdio> #include "ucpp" main() { ustring a = "test"; ustring b = "ing"; ustring c = "- -"; ustring d = "cafe\xcc\x81"; printf("%s\n", (a + b + c[1] + d).encode()); } I get a nice "testing cafe´" message. However, if I modify the code slightly so that the const char * conversion is done separately, post-declaration: #include <cstdio> #include "ucpp" main() { ustring a = "test"; ustring b = "ing"; ustring c = "- -"; ustring d; d = "cafe\xcc\x81"; printf("%s\n", (a + b + c[1] + d).encode()); } the ustring named d becomes blank, and all that is output is "testing ". My new code has three constructors, one void (which is probably the one being incorrectly used, and is used in the operator+ function), one that takes a const ustring &, and one that takes a const char *. The following is my new library code: #include <cstdlib> #include <cstring> class ustring { int * values; long len; public: long length() { return len; } ustring() { len = 0; values = (int *) malloc(0); } ustring(const ustring &input) { len = input.len; values = (int *) malloc(sizeof(int) * len); for (long i = 0; i < len; i++) values[i] = input.values[i]; } ustring operator=(ustring input) { ustring result(input); return result; } ustring(const char * input) { values = (int *) malloc(0); long s = 0; // s = number of parsed chars int a, b, c, d, contNeed = 0, cont = 0; for (long i = 0; input[i]; i++) if (input[i] < 0x80) { // ASCII, direct copy (00-7f) values = (int *) realloc(values, sizeof(int) * ++s); values[s - 1] = input[i]; } else if (input[i] < 0xc0) { // this is a continuation (80-bf) if (cont == contNeed) { // no need for continuation, use U+fffd values = (int *) realloc(values, sizeof(int) * ++s); values[s - 1] = 0xfffd; } cont = cont + 1; values[s - 1] = values[s - 1] | ((input[i] & 0x3f) << ((contNeed - cont) * 6)); if (cont == contNeed) cont = contNeed = 0; } else if (input[i] < 0xc2) { // invalid byte, use U+fffd (c0-c1) values = (int *) realloc(values, sizeof(int) * ++s); values[s - 1] = 0xfffd; } else if (input[i] < 0xe0) { // start of 2-byte sequence (c2-df) contNeed = 1; values = (int *) realloc(values, sizeof(int) * ++s); values[s - 1] = (input[i] & 0x1f) << 6; } else if (input[i] < 0xf0) { // start of 3-byte sequence (e0-ef) contNeed = 2; values = (int *) realloc(values, sizeof(int) * ++s); values[s - 1] = (input[i] & 0x0f) << 12; } else if (input[i] < 0xf5) { // start of 4-byte sequence (f0-f4) contNeed = 3; values = (int *) realloc(values, sizeof(int) * ++s); values[s - 1] = (input[i] & 0x07) << 18; } else { // restricted or invalid (f5-ff) values = (int *) realloc(values, sizeof(int) * ++s); values[s - 1] = 0xfffd; } len = s; } ustring operator=(const char * input) { ustring result(input); return result; } ustring operator+(ustring input) { ustring result; result.len = len + input.len; result.values = (int *) malloc(sizeof(int) * result.len); for (long i = 0; i < len; i++) result.values[i] = values[i]; for (long i = 0; i < input.len; i++) result.values[i + len] = input.values[i]; return result; } ustring operator[](long index) { ustring result; result.len = 1; result.values = (int *) malloc(sizeof(int)); result.values[0] = values[index]; return result; } char * encode() { char * r = (char *) malloc(0); long s = 0; for (long i = 0; i < len; i++) { if (values[i] < 0x80) r = 
(char *) realloc(r, s + 1), r[s + 0] = char(values[i]), s += 1; else if (values[i] < 0x800) r = (char *) realloc(r, s + 2), r[s + 0] = char(values[i] >> 6 | 0x60), r[s + 1] = char(values[i] & 0x3f | 0x80), s += 2; else if (values[i] < 0x10000) r = (char *) realloc(r, s + 3), r[s + 0] = char(values[i] >> 12 | 0xe0), r[s + 1] = char(values[i] >> 6 & 0x3f | 0x80), r[s + 2] = char(values[i] & 0x3f | 0x80), s += 3; else r = (char *) realloc(r, s + 4), r[s + 0] = char(values[i] >> 18 | 0xf0), r[s + 1] = char(values[i] >> 12 & 0x3f | 0x80), r[s + 2] = char(values[i] >> 6 & 0x3f | 0x80), r[s + 3] = char(values[i] & 0x3f | 0x80), s += 4; } return r; } };
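
    The blank d follows from the operator= overloads constructing and returning a brand-new ustring by value: the object on the left-hand side of = is never written to. A conventional assignment operator writes into *this and returns ustring&. The toy below only models that shape, with the UTF-8 decode again reduced to byte copying; the copy-and-swap form is one common way to reuse the const char* constructor and stay exception-safe, not necessarily how the real class has to end up.

        #include <cstdio>
        #include <cstring>
        #include <utility>

        // Toy model: operator= takes its argument by value (a copy), swaps that copy's
        // guts into *this, and returns ustring&; the old buffer goes away with the copy.
        class ustring {
            int* values;
            long len;
        public:
            ustring() : values(0), len(0) {}
            ustring(const char* input)
                : values(new int[std::strlen(input)]),
                  len(static_cast<long>(std::strlen(input))) {
                for (long i = 0; i < len; ++i) values[i] = input[i];   // decode elided
            }
            ustring(const ustring& o) : values(new int[o.len]), len(o.len) {
                for (long i = 0; i < len; ++i) values[i] = o.values[i];
            }
            ustring& operator=(ustring other) {
                std::swap(values, other.values);
                std::swap(len, other.len);
                return *this;
            }
            ~ustring() { delete[] values; }
            long length() const { return len; }
        };

        int main() {
            ustring d;
            d = "cafe";                           // const char* -> temporary ustring -> swapped in
            std::printf("%ld\n", d.length());     // prints 4
        }

    Keeping malloc/realloc in the posted class is fine too; the essential changes are the ustring& return type, writing into the members of *this, and, since the class owns raw memory, pairing the copy constructor with a matching destructor.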

    Read the article

  • Memory limit for running external executables within ASP.NET

    - by itsbalur
    I am using WkhtmltoPdf in my C# web application running in .NET 4.0 to generate PDFs from HTML files. In general everything works fine except when the size of the HTML file is below 250KB. Once the HTML file size increases beyond that, the process which runs the wkhtmltopdf.exe gives an exception as below. On the Task Manager, I have seen that the Memory value for the wkhtmltopdf.exe process does not increase beyond a value of 40,096 K, which I believe is the reason why the process is abandoned in between. How can we configure such that the memory limit for external exes can be increased? Is there any other way of solving this issue? More info: When I run the conversion from the command line directly, the PDF is generated fine. So, its unlikely to be a problem with WkhtmlToPdf. The error is from localhost. I have tried the same on the DEV server, with the same result. Exception: > [Exception: Loading pages (1/6) [> > ] 0% [======> ] > 10% [======> ] 11% > [=======> ] 13% > [=========> ] 15% > [==========> ] 18% > [============> ] 20% > [=============> ] 22% > [==============> ] 24% > [===============> ] 26% > [=================> ] 29% > [==================> ] 31% > [===================> ] 33% > [=====================> ] 35% > [======================> ] 37% > [========================> ] 40% > [=========================> ] 42% > [==========================> ] 44% > [============================> ] 47% > [=============================> ] 49% > [==============================> ] 51% > [============================================================] 100% > Counting pages (2/6) > [============================================================] Object > 1 of 1 Resolving links (4/6) > [============================================================] Object > 1 of 1 Loading headers and footers (5/6) > Printing pages (6/6) [> > ] Preparing [=> > ] Page 1 of 49 [==> > ] Page 2 of 49 [===> > ] Page 3 of 49 [====> > ] Page 4 of 49 [======> > ] Page 5 of 49 [=======> > ] Page 6 of 49 [========> > ] Page 7 of 49 [=========> > ] Page 8 of 49 [==========> > ] Page 9 of 49 [============> > ] Page 10 of 49 [=============> > ] Page 11 of 49 [==============> > ] Page 12 of 49 [===============> > ] Page 13 of 49 [================> > ] Page 14 of 49 [==================> > ] Page 15 of 49 [===================> > ] Page 16 of 49 [====================> > ] Page 17 of 49 [=====================> > ] Page 18 of 49 [======================> > ] Page 19 of 49 [========================> > ] Page 20 of 49 [=========================> > ] Page 21 of 49 [==========================> > ] Page 22 of 49 [===========================> > ] Page 23 of 49 [============================> > ] Page 24 of 49 [==============================> > ] Page 25 of 49 [===============================> > ] Page 26 of 49 [=================================> > ] Page 27 of 49 [==================================> > ] Code that I use: var fileName = " - "; var wkhtmlDir = ConfigurationManager.AppSettings[Constants.AppSettings.ExportToPdfExecutablePath]; var wkhtml = ConfigurationManager.AppSettings[Constants.AppSettings.ExportToPdfExecutablePath] + "\\wkhtmltopdf.exe"; var p = new Process(); string switches = ""; switches += "--print-media-type "; switches += "--margin-top 10mm --margin-bottom 10mm --margin-right 5mm --margin-left 5mm "; switches += "--page-size A4 "; switches += "--disable-smart-shrinking "; var startInfo = new ProcessStartInfo { CreateNoWindow = true, FileName = wkhtml, Arguments = switches + " " + url + " " + fileName, UseShellExecute = 
false, RedirectStandardOutput = true, RedirectStandardError = true, RedirectStandardInput=true, WorkingDirectory=wkhtmlDir }; p.StartInfo = startInfo; p.Start();

    Read the article

  • localhost/phpmyadmin returns a blank page

    - by Atul Modi
    When I tried configuring local machine as a Internet Gateway with website development capabilities over it and I installed all required software into it. I also had disable the selinux into it. But PROBLEM is when I do http://localhost/phpMyAdmin or all lower case than the page shows it as a blank page. I am pasting code from httpd.conf file into this as well as from phpMyAdmin.conf file I am using Fedora 16 for this. httpd.conf ServerTokens OS ServerRoot "/etc/httpd" PidFile run/httpd.pid Timeout 60 KeepAlive Off MaxKeepAliveRequests 100 KeepAliveTimeout 5 StartServers 8 MinSpareServers 5 MaxSpareServers 20 ServerLimit 256 MaxClients 256 MaxRequestsPerChild 4000 StartServers 4 MaxClients 300 MinSpareThreads 25 MaxSpareThreads 75 ThreadsPerChild 25 MaxRequestsPerChild 0 Listen 80 LoadModule auth_basic_module modules/mod_auth_basic.so LoadModule auth_digest_module modules/mod_auth_digest.so LoadModule authn_file_module modules/mod_authn_file.so LoadModule authn_alias_module modules/mod_authn_alias.so LoadModule authn_anon_module modules/mod_authn_anon.so LoadModule authn_dbm_module modules/mod_authn_dbm.so LoadModule authn_default_module modules/mod_authn_default.so LoadModule authz_host_module modules/mod_authz_host.so LoadModule authz_user_module modules/mod_authz_user.so LoadModule authz_owner_module modules/mod_authz_owner.so LoadModule authz_groupfile_module modules/mod_authz_groupfile.so LoadModule authz_dbm_module modules/mod_authz_dbm.so LoadModule authz_default_module modules/mod_authz_default.so LoadModule authn_dbd_module modules/mod_authn_dbd.so LoadModule dbd_module modules/mod_dbd.so LoadModule ldap_module modules/mod_ldap.so LoadModule authnz_ldap_module modules/mod_authnz_ldap.so LoadModule include_module modules/mod_include.so LoadModule log_config_module modules/mod_log_config.so LoadModule logio_module modules/mod_logio.so LoadModule env_module modules/mod_env.so LoadModule ext_filter_module modules/mod_ext_filter.so LoadModule mime_magic_module modules/mod_mime_magic.so LoadModule expires_module modules/mod_expires.so LoadModule deflate_module modules/mod_deflate.so LoadModule headers_module modules/mod_headers.so LoadModule usertrack_module modules/mod_usertrack.so LoadModule setenvif_module modules/mod_setenvif.so LoadModule mime_module modules/mod_mime.so LoadModule dav_module modules/mod_dav.so LoadModule status_module modules/mod_status.so LoadModule autoindex_module modules/mod_autoindex.so LoadModule info_module modules/mod_info.so LoadModule dav_fs_module modules/mod_dav_fs.so LoadModule vhost_alias_module modules/mod_vhost_alias.so LoadModule negotiation_module modules/mod_negotiation.so LoadModule dir_module modules/mod_dir.so LoadModule actions_module modules/mod_actions.so LoadModule speling_module modules/mod_speling.so LoadModule userdir_module modules/mod_userdir.so LoadModule alias_module modules/mod_alias.so LoadModule substitute_module modules/mod_substitute.so LoadModule rewrite_module modules/mod_rewrite.so LoadModule proxy_module modules/mod_proxy.so LoadModule proxy_balancer_module modules/mod_proxy_balancer.so LoadModule proxy_ftp_module modules/mod_proxy_ftp.so LoadModule proxy_http_module modules/mod_proxy_http.so LoadModule proxy_ajp_module modules/mod_proxy_ajp.so LoadModule proxy_connect_module modules/mod_proxy_connect.so LoadModule cache_module modules/mod_cache.so LoadModule suexec_module modules/mod_suexec.so LoadModule disk_cache_module modules/mod_disk_cache.so LoadModule cgi_module modules/mod_cgi.so LoadModule version_module 
modules/mod_version.so Include conf.d/*.conf User apache Group apache ServerAdmin root@localhost UseCanonicalName Off DocumentRoot "/var/www/html" Options FollowSymLinks AllowOverride None Options Indexes FollowSymLinks AllowOverride None Order allow,deny Allow from all UserDir disabled DirectoryIndex index.html index.htm index.php AccessFileName .htaccess Order allow,deny Deny from all Satisfy All TypesConfig /etc/mime.types DefaultType text/plain MIMEMagicFile conf/magic HostnameLookups Off ErrorLog logs/error_log LogLevel warn LogFormat "%h %l %u %t \"%r\" %s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined LogFormat "%h %l %u %t \"%r\" %s %b" common LogFormat "%{Referer}i - %U" referer LogFormat "%{User-agent}i" agent CustomLog logs/access_log combined ServerSignature On Alias /icons/ "/var/www/icons/" Options Indexes MultiViews FollowSymLinks AllowOverride None Order allow,deny Allow from all # Location of the WebDAV lock database. DAVLockDB /var/lib/dav/lockdb ScriptAlias /cgi-bin/ "/var/www/cgi-bin/" AllowOverride None Options None Order allow,deny Allow from all IndexOptions FancyIndexing VersionSort NameWidth=* HTMLTable Charset=UTF-8 AddIconByEncoding (CMP,/icons/compressed.gif) x-compress x-gzip AddIconByType (TXT,/icons/text.gif) text/* AddIconByType (IMG,/icons/image2.gif) image/* AddIconByType (SND,/icons/sound2.gif) audio/* AddIconByType (VID,/icons/movie.gif) video/* AddIcon /icons/binary.gif .bin .exe AddIcon /icons/binhex.gif .hqx AddIcon /icons/tar.gif .tar AddIcon /icons/world2.gif .wrl .wrl.gz .vrml .vrm .iv AddIcon /icons/compressed.gif .Z .z .tgz .gz .zip AddIcon /icons/a.gif .ps .ai .eps AddIcon /icons/layout.gif .html .shtml .htm .pdf AddIcon /icons/text.gif .txt AddIcon /icons/c.gif .c AddIcon /icons/p.gif .pl .py AddIcon /icons/f.gif .for AddIcon /icons/dvi.gif .dvi AddIcon /icons/uuencoded.gif .uu AddIcon /icons/script.gif .conf .sh .shar .csh .ksh .tcl AddIcon /icons/tex.gif .tex AddIcon /icons/bomb.gif core AddIcon /icons/back.gif .. 
AddIcon /icons/hand.right.gif README AddIcon /icons/folder.gif ^^DIRECTORY^^ AddIcon /icons/blank.gif ^^BLANKICON^^ DefaultIcon /icons/unknown.gif ReadmeName README.html HeaderName HEADER.html IndexIgnore .??* *~ # HEADER README* RCS CVS *,v *,t AddLanguage ca .ca AddLanguage cs .cz .cs AddLanguage da .dk AddLanguage de .de AddLanguage el .el AddLanguage en .en AddLanguage eo .eo AddLanguage es .es AddLanguage et .et AddLanguage fr .fr AddLanguage he .he AddLanguage hr .hr AddLanguage it .it AddLanguage ja .ja AddLanguage ko .ko AddLanguage ltz .ltz AddLanguage nl .nl AddLanguage nn .nn AddLanguage no .no AddLanguage pl .po AddLanguage pt .pt AddLanguage pt-BR .pt-br AddLanguage ru .ru AddLanguage sv .sv AddLanguage zh-CN .zh-cn AddLanguage zh-TW .zh-tw LanguagePriority en ca cs da de el eo es et fr he hr it ja ko ltz nl nn no pl pt pt-BR ru sv zh-CN zh-TW ForceLanguagePriority Prefer Fallback AddDefaultCharset UTF-8 AddType application/x-tar .tgz AddType application/x-httpd-php .php AddType application/x-httpd-php .xml AddHandler application/x-httpd-php .xml AddType application/x-compress .Z AddType application/x-gzip .gz .tgz AddType application/x-x509-ca-cert .crt AddType application/x-pkcs7-crl .crl AddHandler type-map var AddType text/html .shtml AddOutputFilter INCLUDES .shtml Alias /error/ "/var/www/error/" AllowOverride None Options IncludesNoExec AddOutputFilter Includes html AddHandler type-map var Order allow,deny Allow from all LanguagePriority en ForceLanguagePriority Prefer Fallback ErrorDocument 400 /error/HTTP_BAD_REQUEST.html.var ErrorDocument 401 /error/HTTP_UNAUTHORIZED.html.var ErrorDocument 403 /error/HTTP_FORBIDDEN.html.var ErrorDocument 404 /error/HTTP_NOT_FOUND.html.var ErrorDocument 405 /error/HTTP_METHOD_NOT_ALLOWED.html.var ErrorDocument 408 /error/HTTP_REQUEST_TIME_OUT.html.var ErrorDocument 500 /error/HTTP_INTERNAL_SERVER_ERROR.html.var ErrorDocument 503 /error/HTTP_SERVICE_UNAVAILABLE.html.var BrowserMatch "Mozilla/2" nokeepalive BrowserMatch "MSIE 4.0b2;" nokeepalive downgrade-1.0 force-response-1.0 BrowserMatch "RealPlayer 4.0" force-response-1.0 BrowserMatch "Java/1.0" force-response-1.0 BrowserMatch "JDK/1.0" force-response-1.0 BrowserMatch "Microsoft Data Access Internet Publishing Provider" redirect-carefully BrowserMatch "MS FrontPage" redirect-carefully BrowserMatch "^WebDrive" redirect-carefully BrowserMatch "^WebDAVFS/1.[0123]" redirect-carefully BrowserMatch "^gnome-vfs/1.0" redirect-carefully BrowserMatch "^XML Spy" redirect-carefully BrowserMatch "^Dreamweaver-WebDAV-SCM1" redirect-carefully Order allow,deny Allow from all # phpMyAdmin.conf Alias /phpMyAdmin /usr/share/phpMyAdmin Alias /phpmyadmin /usr/share/phpMyAdmin Order Allow,Deny Allow from All Allow from 127.0.0.1 Allow from ::1 Order Allow,Deny Allow from All Allow from 127.0.0.1 Allow from ::1 Order Deny,Allow Deny from All Allow from None Order Deny,Allow Deny from All Allow from None Order Deny,Allow Deny from All Allow from None Can anyone help into this area please. Urgent reply will be appreciatable because i am struggling since one and half month for this matter. thank you, Atul

    Read the article

  • How does one get rid of fishy behavior in Windows?

    - by Tom Wijsman
    After I had boot my computer this morning there suddenly flooded water from the top of the screen, after which some fishes dropped into it. Now I can barely see what I am doing because the water distorts the view. Sometimes the fish follow the cursor so I need to move it away or wait for the fish to mind their own business. This makes it very annoying to use my system. What have I tried? Reboot the system. This caused the water to deplete from the desktop. Upon reboot, the screen was refilled with water and fishes. Attach another monitor. Same problem, fills that monitor as well and gives me extra fish. Clicking the fish. Makes them turn direction. Right clicking the fish. Changes color of the fish, not really useful. I'm locked out of changing the background or screen saver settings. Hence, I had to post the lady below... Safe mode doesn't save me from the fishes. It does give me another background there, but I can't screenshot easily. Other user accounts experience this as well. The Guest account seems to experience more fish than the other accounts. Using HijackThis, OTL Timekeeper List, Syninternal Autoruns, RootKitRevealer, ShellExView and similar tools I can't seem to find any entries that could be it, the Sysinternals tools show everything as verified. I'm suspecting this to be a driver problem. Randomly removing drivers doesn't seem to alleviate the problem. When removing the Graphics Drivers, it makes my screen black. While that could be considered the solution, it's not what I want. Changing the time / date settings does also not seem to affect the fishes. Changing the time a few years in the future, I would have expected the fishes to be dead. But, the same fishes are still there... They simply won't die! Tried to get used to them. They are really bothering me, looks like they require food. I don't know how to give them food, but apparently they get it elsewhere during reboot... Tried to disable my mouse pointer and use the keyboard. This works, they now swim around more randomly. They do put their attention to huge changes on the screen, so I need to type slow. Or otherwise I can't see what I'm tying exactly. Hold my laptop upside down. This seems to affect the water and fishes, but the water stays in the screen. They seem super resistant against water sickness and confusion though... What does the problem look like? What do I need? A way to get rid of these fishes on my screen forever, they are really annoying me a lot and I'm about to crack the screen to see if that makes them escape. Do you have any idea why this problem is occurring? What are my considerations? Buying an USB fish tank could make the fish leave the screen, I am uncertain though whether the fish could leave the screen through the USB cable. Using the FISh (programming language) which seems to provide EXPRESSIVE POWER and EFFICIENT EXECUTION, I can however not find any examples on how to remove fish. What are my Specifications? I'm using a Sony Vaio Fishy laptop. Sony VAIO VGN-Fishy, VAIO. Processor: 1337 MHz, Intel Core 2 Duo, T5432, 1 MB, Intel PM965 Express, 667 MHz. Memory: 1024 MB, DDR2-SDRAM, 667 MHz, 2 x 1024 MB, 4 GB. Disk Drive: 50 GB, Serial ATA, 5400 RPM. Storage Media: Memory Stick™, Memory Stick PRO™. Display: 15.4 ", 1280 x 800 pixels, LCD. Video: GeForce 8400M GT, 128 MB. Optical Drive: DVD±R/RW DL, 24 x, 24 x, 24 x, 6 x, 4 x, 6 x, 4 x, 5 x, 5 x, 8 x, 8 x, 8 x, 8 x, 6 x, 6 x, 24 x, 24 x, 24 x, 16 x. Camera: 1.3 MP, 30 fps. Networking: 2.0+EDR. Keyboard: Touchpad, AZERTY. 
Operating System/Software: Windows Vista Home Premium. Security: Kensington. Weight & Dimensions: 98.8 oz (2800 g), 14 " (355.8 mm), 10 " (254.4 mm), 0.98 " (24.9 mm). Other features: 100 BASE-TX/10 BASE-T, 802.11a/b/g/n/Draft n, V92/V.90, fishes. Plz! Help me...

    Read the article

  • Problem installing Ubuntu 10.04 64-bit side by side with Vista by using a bootable USB drive. What now?

    - by Adam Siddhi
    What happened I decided to install Ubuntu 10.04 64 bit side by side with Vista Home Premium (I guess on another partition) with a USB stick. I found instructions on how to do this here: https://help.ubuntu.com/community/Installation/FromUSBStick To create the bootable USB drive I had to download a program called Unetbootin. That process was simple enough. All I had to do was just choose the disk image option, select the ubuntu-10.04-desktop-amd64.iso image, make sure it recognizes my USB drive and then press OK. It takes only like a few minutes to create a working bootable USB drive. Then I have to restart my computer, enter the BIOS, select my USB drive as the first boot drive, save options and continue with booting up. After this Ubuntu actually loads up. I think this is known as the Live version of Ubuntu so you can try it out before fully installing it. Any ways, on the Ubuntu 10.04 desktop I saw an installer. I click it and begin the installation process. Just so you know, I tried installing it 2 times. I will explain what happened each time: The first time I tried installing Ubuntu 10.04 I got stuck at step 4 of 7. I remember selecting the last option in the window which was Specify Partitions Manually (Advanced) I made my partition for Ubuntu like 52 gigs. I clicked forward and a little pop up window appeared saying Please Wait. So the installation process stalled on this window so I closed out of it and quit the installation process. So at this point I was worried because I had already selected the partition size and assumed it started making it. Since it stalled I had to quit out though. Anyways, once again I reached step 4 of 7 a decided to select the first option which is Install them side by side choosing between them each startup. I figured this was the safe way to go. I did that and the pop up window saying Please Wait popped up again but lasted only like 10 seconds. Then I got to I guess step 6 where it asks you to enter your desired name and password. Did that and clicked forward. The Ubuntu 10.04 installation load screen appeared and the loading bar at the bottom started filling up. So I got to 83% and stalled during the Importing other profile information (I think it was called this. I had the option to do this during I think step 6) process. So at this point I decided to get stop the installation process. I was getting very nervous. I tried to restart the computer but all that happened was that Ubuntu restarted. I finally got the computer to restart. I was pretty sure I had screwed something up big time by this point. As my computer was restarting I entered BIOS again and switched back to it booting from my main hard drive containing Vista. Saved it and continued the boot process. My worst fears were confirmed as Vista would not boot up. I mean I saw the little Microsoft Windows choppy animated green loading bar at the bottom of the screen and then boom! It decided to restart. When it restarted I had the option to run a memory test check to see if there was anything that needed to be repaired. That took like 20 minutes and at the end I saw that I did indeed have to repair something. I had to go through 2 repair processes. After each I had to restart the computer. The 2nd time it went through the repair process it said that it could not fully repair the damage. I was scared and restarted but Vista did load up. 
I got to my desktop and saw a message saying something like Repairs have been made, Please restart for changes to take effect I noticed that some Notification icons were missing and I could not hear volume in a video. Things were a bit funky. So I did restart and here I am. Now what?! So since I got back into Vista and thankfully have a working Internet connection I am trying to find answers to my problem (that is why I am writing this post). I am scared that I have partioned my hard drive 2 times after researching Installing Ubuntu 10.04 and seeing this post http://techie-buzz.com/foss/ubuntu-10-04-lts-installation-guide.html The author shows screen shots of installing Ubuntu 10.04. He shows the image of step 4 of 7 with a caption at the bottom. I will recreate it below: Select a partitioning option. Unless you want to format all the hard drive and install Ubuntu afresh, select the last option and proceed. Questions If I have indeed partitioned my HD 2 times (which I am sure it is), how do I get to a point where I can see all my bad, unfinished Ubuntu partitions and get rid of them? How do I clean this big mess up? & How can I ensure that this mess will not happen next time I try installing Ubuntu 10.04? Thank you Adam

    Read the article

  • Print driver installs failing

    - by Kasius
    All of the Windows 7 64-bit Enterprise machines in my organization are failing to install a good number of printer drivers that previously installed without issue. This only happens with printer drivers. And not with all printer drivers. Just some. Network drivers, video drivers, etc. have had no problems. Here is part of setupapi.dev.log for a Dymo LabelWriter printer driver that is failing to install: dvi: {Plug and Play Service: Device Install for USBPRINT\DYMOLABELWRITER_450_TURBO\6&538F51D&0&USB001} ump: Creating Install Process: DrvInst.exe 09:36:58.071 ndv: Infpath=C:\Windows\INF\oem0.inf ndv: DriverNodeName=dymo.inf:DYMO.NTamd64.6.0:LW_450_TURBO_VISTA:8.1.0.363:usbprint\dymolabelwriter_450_aa08 ndv: DriverStorepath=C:\Windows\System32\DriverStore\FileRepository\dymo.inf_amd64_neutral_3a631b118b7a5828\dymo.inf ndv: Building driver list from driver node strong name... dvi: Searching for hardware ID(s): dvi: usbprint\dymolabelwriter_450_aa08 dvi: dymolabelwriter_450_aa08 inf: Opened PNF: 'C:\Windows\System32\DriverStore\FileRepository\dymo.inf_amd64_neutral_3a631b118b7a5828\dymo.inf' ([strings]) dvi: Selected driver installs from section [LW_450_TURBO_VISTA] in 'c:\windows\system32\driverstore\filerepository\dymo.inf_amd64_neutral_3a631b118b7a5828\dymo.inf'. dvi: Class GUID of device changed to: {4d36e979-e325-11ce-bfc1-08002be10318}. dvi: Set selected driver complete. ndv: {Core Device Install} 09:36:58.133 inf: Opened INF: 'C:\Windows\INF\oem0.inf' ([strings]) inf: Saved PNF: 'C:\Windows\INF\oem0.PNF' (Language = 0409) dvi: {DIF_ALLOW_INSTALL} 09:36:58.164 dvi: Using exported function 'ClassInstall32' in module 'C:\Windows\system32\ntprint.dll'. dvi: Class installer == ntprint.dll,ClassInstall32 dvi: No CoInstallers found dvi: Class installer: Enter 09:36:58.164 dvi: Class installer: Exit dvi: Default installer: Enter 09:36:58.180 dvi: Default installer: Exit dvi: {DIF_ALLOW_INSTALL - exit(0xe000020e)} 09:36:58.180 ndv: Installing files... dvi: {DIF_INSTALLDEVICEFILES} 09:36:58.180 dvi: Class installer: Enter 09:36:58.180 inf: Opened INF: 'C:\Windows\System32\DriverStore\FileRepository\dymo.inf_amd64_neutral_3a631b118b7a5828\dymo.inf' ([strings]) inf: Opened INF: 'C:\Windows\System32\DriverStore\FileRepository\dymo.inf_amd64_neutral_3a631b118b7a5828\dymo.inf' ([strings]) !!! dvi: Class installer: failed(0x00000490)! !!! dvi: Error 1168: Element not found. dvi: {DIF_INSTALLDEVICEFILES - exit(0x00000490)} 09:37:22.063 ndv: Device install status=0x00000490 ndv: Performing device install final cleanup... ! ndv: Queueing up error report since device installation failed... ndv: {Core Device Install - exit(0x00000490)} 09:37:22.063 dvi: {DIF_DESTROYPRIVATEDATA} 09:37:22.063 dvi: Class installer: Enter 09:37:22.063 dvi: Class installer: Exit dvi: Default installer: Enter 09:37:22.063 dvi: Default installer: Exit dvi: {DIF_DESTROYPRIVATEDATA - exit(0xe000020e)} 09:37:22.063 ump: Server install process exited with code 0x00000490 09:37:22.063 ump: {Plug and Play Service: Device Install exit(00000490)} Notice these lines in particular: !!! dvi: Class installer: failed(0x00000490)! !!! dvi: Error 1168: Element not found. dvi: {DIF_INSTALLDEVICEFILES - exit(0x00000490)} 09:37:22.063 ndv: Device install status=0x00000490 From what I have read, the "Element not found" error should be accompanied by an event describing what element was not found. 
The error that appears in Device Manager is "The driver cannot be installed because it is either not digitally signed or not signed in the appropriate manner." It appears to be signed fine though. It has an accompanying .CAT file and worked previously. And when installing, the following messages are logged in setupapi.dev.log: sto: {DRIVERSTORE_IMPORT_NOTIFY_VALIDATE} 09:36:56.277 inf: Opened INF: 'C:\Windows\System32\DriverStore\Temp\{272e2305-961c-7942-9ede-966f01047043}\dymo.inf' ([strings]) sig: {_VERIFY_FILE_SIGNATURE} 09:36:56.292 sig: Key = dymo.inf sig: FilePath = C:\Windows\System32\DriverStore\Temp\{272e2305-961c-7942-9ede-966f01047043}\dymo.inf sig: Catalog = C:\Windows\System32\DriverStore\Temp\{272e2305-961c-7942-9ede-966f01047043}\DYMO.CAT sig: Success: File is signed in catalog. sig: {_VERIFY_FILE_SIGNATURE exit(0x00000000)} 09:36:56.355 sto: Validating driver package files against catalog 'DYMO.CAT'. sto: Driver package is valid. sto: {DRIVERSTORE_IMPORT_NOTIFY_VALIDATE exit(0x00000000)} 09:36:56.402 sto: Verified driver package signature: sto: Digital Signer Score = 0x0D000005 sto: Digital Signer Name = Microsoft Windows Hardware Compatibility Publisher Now here's where it gets strange. If I take it off the domain, it installs fine. But it doesn't seem to have anything to do with Group Policy. I moved the machine to an OU that blocks inheritance, ran a gpupdate, ran rsop.msc to verify, and tried again. And it still didn't work. Likewise, I removed a machine from the domain, manually set all of the domain Group Policy settings in gpedit.msc, and tried that way, and it worked fine. So it seems like the Group Policy settings are irrelevant. What other domain-related issue could be causing this though? Any ideas on what to try next would be greatly appreciated. I'm not sure where to go from here. Thanks!

    Read the article

  • System halts for a fraction of a second every 2-3 seconds

    - by iSam
    I'm using Windows 7 on my HP ProBook 4250s. The problem I face is that my system halts for a fraction of second after every 2-3 seconds. These jerks are not letting me concentrate or work properly. This happens even when I'm just typing in notepad while no other application is running. I tried to install every driver from HP's website and there's no item in device manager marked with yellow icon. Following are my system specs: Machine: HP ProBook 4250s OS: Windows 7 professional RAM: 2GB Processor: Intel Core i3 2.27GHz Following is my HijackThis Log: **Logfile of HijackThis v1.99.1** Scan saved at 9:34:03 PM, on 11/13/2012 Platform: Unknown Windows (WinNT 6.01.3504) MSIE: Internet Explorer v9.00 (9.00.8112.16450) **Running processes:** C:\Windows\system32\taskhost.exe C:\Windows\System32\rundll32.exe C:\Windows\system32\Dwm.exe C:\Windows\Explorer.EXE C:\Windows\System32\igfxtray.exe C:\Windows\System32\hkcmd.exe C:\Windows\System32\igfxpers.exe C:\Program Files\Synaptics\SynTP\SynTPEnh.exe C:\Program Files\PowerISO\PWRISOVM.EXE C:\Program Files\AVAST Software\Avast\AvastUI.exe C:\Program Files\Free Download Manager\fdm.exe C:\Windows\system32\wuauclt.exe C:\Program Files\Windows Media Player\wmplayer.exe C:\Program Files\Microsoft Office\Office12\WINWORD.EXE C:\HijackThis\HijackThis.exe R1 - HKCU\Software\Microsoft\Internet Explorer\Main,Search Page = http://go.microsoft.com/fwlink/?LinkId=54896 R0 - HKCU\Software\Microsoft\Internet Explorer\Main,Start Page = http://bing.com/ R1 - HKLM\Software\Microsoft\Internet Explorer\Main,Default_Page_URL = http://go.microsoft.com/fwlink/?LinkId=69157 R1 - HKLM\Software\Microsoft\Internet Explorer\Main,Default_Search_URL = http://go.microsoft.com/fwlink/?LinkId=54896 R1 - HKLM\Software\Microsoft\Internet Explorer\Main,Search Page = http://go.microsoft.com/fwlink/?LinkId=54896 R0 - HKLM\Software\Microsoft\Internet Explorer\Main,Start Page = http://go.microsoft.com/fwlink/?LinkId=69157 R0 - HKLM\Software\Microsoft\Internet Explorer\Search,SearchAssistant = R0 - HKLM\Software\Microsoft\Internet Explorer\Search,CustomizeSearch = R0 - HKCU\Software\Microsoft\Internet Explorer\Toolbar,LinksFolderName = R3 - URLSearchHook: (no name) - {7473b6bd-4691-4744-a82b-7854eb3d70b6} - (no file) O2 - BHO: Adobe PDF Reader Link Helper - {06849E9F-C8D7-4D59-B87D-784B7D6BE0B3} - C:\Program Files\Common Files\Adobe\Acrobat\ActiveX\AcroIEHelper.dll O2 - BHO: Babylon toolbar helper - {2EECD738-5844-4a99-B4B6-146BF802613B} - (no file) O2 - BHO: MrFroggy - {856E12B5-22D7-4E22-9ACA-EA9A008DD65B} - C:\Program Files\Minibar\Froggy.dll O2 - BHO: avast! WebRep - {8E5E2654-AD2D-48bf-AC2D-D17F00898D06} - C:\Program Files\AVAST Software\Avast\aswWebRepIE.dll O2 - BHO: Windows Live ID Sign-in Helper - {9030D464-4C02-4ABF-8ECC-5164760863C6} - C:\Program Files\Common Files\Microsoft Shared\Windows Live\WindowsLiveLogin.dll O2 - BHO: Minibar BHO - {AA74D58F-ACD0-450D-A85E-6C04B171C044} - C:\Program Files\Minibar\Kango.dll O2 - BHO: Free Download Manager - {CC59E0F9-7E43-44FA-9FAA-8377850BF205} - C:\Program Files\Free Download Manager\iefdm2.dll O2 - BHO: HP Network Check Helper - {E76FD755-C1BA-4DCB-9F13-99BD91223ADE} - C:\Program Files\Hewlett-Packard\HP Support Framework\Resources\HPNetworkCheck\HPNetworkCheckPlugin.dll O3 - Toolbar: (no name) - {98889811-442D-49dd-99D7-DC866BE87DBC} - (no file) O3 - Toolbar: avast! 
WebRep - {8E5E2654-AD2D-48bf-AC2D-D17F00898D06} - C:\Program Files\AVAST Software\Avast\aswWebRepIE.dll O4 - HKLM\..\Run: [IgfxTray] C:\Windows\system32\igfxtray.exe O4 - HKLM\..\Run: [HotKeysCmds] C:\Windows\system32\hkcmd.exe O4 - HKLM\..\Run: [Persistence] C:\Windows\system32\igfxpers.exe O4 - HKLM\..\Run: [SynTPEnh] %ProgramFiles%\Synaptics\SynTP\SynTPEnh.exe O4 - HKLM\..\Run: [PWRISOVM.EXE] C:\Program Files\PowerISO\PWRISOVM.EXE -startup O4 - HKLM\..\Run: [AdobeAAMUpdater-1.0] "C:\Program Files\Common Files\Adobe\OOBE\PDApp\UWA\UpdaterStartupUtility.exe" O4 - HKLM\..\Run: [AdobeCS6ServiceManager] "C:\Program Files\Common Files\Adobe\CS6ServiceManager\CS6ServiceManager.exe" -launchedbylogin O4 - HKLM\..\Run: [SwitchBoard] C:\Program Files\Common Files\Adobe\SwitchBoard\SwitchBoard.exe O4 - HKLM\..\Run: [ROC_roc_ssl_v12] "C:\Program Files\AVG Secure Search\ROC_roc_ssl_v12.exe" / /PROMPT /CMPID=roc_ssl_v12 O4 - HKLM\..\Run: [avast] "C:\Program Files\AVAST Software\Avast\avastUI.exe" /nogui O4 - HKLM\..\Run: [Wordinn English to Urdu Dictionary] "C:\Program Files\Wordinn\Urdu Dictionary\bin\Lugat.exe" -h O4 - HKLM\..\Run: [Adobe Reader Speed Launcher] "C:\Program Files\Adobe\Reader 8.0\Reader\Reader_sl.exe" O4 - HKCU\..\Run: [Comparator Fast] "C:\Program Files\Interdesigner Software\Comparator Fast\ComparatorFast.exe" /STARTUP O4 - HKCU\..\Run: [Free Download Manager] "C:\Program Files\Free Download Manager\fdm.exe" -autorun O8 - Extra context menu item: Download all with Free Download Manager - file://C:\Program Files\Free Download Manager\dlall.htm O8 - Extra context menu item: Download selected with Free Download Manager - file://C:\Program Files\Free Download Manager\dlselected.htm O8 - Extra context menu item: Download video with Free Download Manager - file://C:\Program Files\Free Download Manager\dlfvideo.htm O8 - Extra context menu item: Download with Free Download Manager - file://C:\Program Files\Free Download Manager\dllink.htm O8 - Extra context menu item: E&xport to Microsoft Excel - res://C:\PROGRA~1\MICROS~2\Office12\EXCEL.EXE/3000 O9 - Extra button: @C:\Program Files\Hewlett-Packard\HP Support Framework\Resources\HPNetworkCheck\HPNetworkCheckPlugin.dll,-103 - {25510184-5A38-4A99-B273-DCA8EEF6CD08} - C:\Program Files\Hewlett-Packard\HP Support Framework\Resources\HPNetworkCheck\NCLauncherFromIE.exe O9 - Extra 'Tools' menuitem: @C:\Program Files\Hewlett-Packard\HP Support Framework\Resources\HPNetworkCheck\HPNetworkCheckPlugin.dll,-102 - {25510184-5A38-4A99-B273-DCA8EEF6CD08} - C:\Program Files\Hewlett-Packard\HP Support Framework\Resources\HPNetworkCheck\NCLauncherFromIE.exe O9 - Extra button: Research - {92780B25-18CC-41C8-B9BE-3C9C571A8263} - C:\PROGRA~1\MICROS~2\Office12\REFIEBAR.DLL O9 - Extra button: Change your facebook look - {AAA38851-3CFF-475F-B5E0-720D3645E4A5} - C:\Program Files\Minibar\MinibarButton.dll O10 - Unknown file in Winsock LSP: c:\windows\system32\nlaapi.dll O10 - Unknown file in Winsock LSP: c:\windows\system32\napinsp.dll O10 - Unknown file in Winsock LSP: c:\program files\common files\microsoft shared\windows live\wlidnsp.dll O10 - Unknown file in Winsock LSP: c:\program files\common files\microsoft shared\windows live\wlidnsp.dll O11 - Options group: [ACCELERATED_GRAPHICS] Accelerated graphics O11 - Options group: [INTERNATIONAL] International O13 - Gopher Prefix: O17 - HKLM\System\CCS\Services\Tcpip\..\{920289D7-5F75-4181-9A37-5627EAA163E3}: NameServer = 8.8.8.8,8.8.4.4 O17 - HKLM\System\CCS\Services\Tcpip\..\{AE83ED2F-EF14-4066-ACE2-C4ED07A68EAA}: 
NameServer = 9.9.9.9,8.8.8.8 O18 - Protocol: ms-help - {314111C7-A502-11D2-BBCA-00C04F8EC294} - C:\Program Files\Common Files\Microsoft Shared\Help\hxds.dll O18 - Protocol: skype4com - {FFC8B962-9B40-4DFF-9458-1830C7DD7F5D} - C:\PROGRA~1\COMMON~1\Skype\SKYPE4~1.DLL O18 - Protocol: wlpg - {E43EF6CD-A37A-4A9B-9E6F-83F89B8E6324} - C:\Program Files\Windows Live\Photo Gallery\AlbumDownloadProtocolHandler.dll O18 - Filter hijack: text/xml - {807563E5-5146-11D5-A672-00B0D022E945} - C:\PROGRA~1\COMMON~1\MICROS~1\OFFICE12\MSOXMLMF.DLL O20 - AppInit_DLLs: c:\progra~2\browse~1\23787~1.43\{16cdf~1\browse~1.dll c:\progra~2\browse~1\22630~1.40\{16cdf~1\browse~1.dll O20 - Winlogon Notify: igfxcui - C:\Windows\SYSTEM32\igfxdev.dll O23 - Service: avast! Antivirus - AVAST Software - C:\Program Files\AVAST Software\Avast\AvastSvc.exe O23 - Service: Google Software Updater (gusvc) - Google - C:\Program Files\Google\Common\Google Updater\GoogleUpdaterService.exe O23 - Service: HP Support Assistant Service - Hewlett-Packard Company - C:\Program Files\Hewlett-Packard\HP Support Framework\hpsa_service.exe O23 - Service: HP Quick Synchronization Service (HPDrvMntSvc.exe) - Hewlett-Packard Company - C:\Program Files\Hewlett-Packard\Shared\HPDrvMntSvc.exe O23 - Service: HP Software Framework Service (hpqwmiex) - Hewlett-Packard Company - C:\Program Files\Hewlett-Packard\Shared\hpqWmiEx.exe O23 - Service: HP Service (hpsrv) - Hewlett-Packard Company - C:\Windows\system32\Hpservice.exe O23 - Service: Intel(R) Management and Security Application Local Management Service (LMS) - Intel Corporation - C:\Program Files\Intel\Intel(R) Management Engine Components\LMS\LMS.exe O23 - Service: Mozilla Maintenance Service (MozillaMaintenance) - Mozilla Foundation - C:\Program Files\Mozilla Maintenance Service\maintenanceservice.exe O23 - Service: @%SystemRoot%\system32\qwave.dll,-1 (QWAVE) - Unknown owner - %windir%\system32\svchost.exe (file missing) O23 - Service: @%SystemRoot%\system32\seclogon.dll,-7001 (seclogon) - Unknown owner - %windir%\system32\svchost.exe (file missing) O23 - Service: Skype Updater (SkypeUpdate) - Skype Technologies - C:\Program Files\Skype\Updater\Updater.exe O23 - Service: Adobe SwitchBoard (SwitchBoard) - Adobe Systems Incorporated - C:\Program Files\Common Files\Adobe\SwitchBoard\SwitchBoard.exe O23 - Service: Intel(R) Management & Security Application User Notification Service (UNS) - Intel Corporation - C:\Program Files\Intel\Intel(R) Management Engine Components\UNS\UNS.exe O23 - Service: @%PROGRAMFILES%\Windows Media Player\wmpnetwk.exe,-101 (WMPNetworkSvc) - Unknown owner - %PROGRAMFILES%\Windows Media Player\wmpnetwk.exe (file missing)
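
    To put a number on the stalls, here is a small timing loop (a rough sketch of my own, nothing HP- or Windows-specific) that sleeps 50 ms at a time and reports whenever the wake-up is much later than requested - if the whole system really freezes for a fraction of a second every 2-3 seconds, the gaps should show up here too:

        # Sketch: detect system-wide stalls by measuring how late short sleeps wake up.
        import time

        SLEEP_S = 0.05      # requested sleep per iteration
        THRESHOLD_S = 0.20  # report anything that overshoots by more than this

        def watch_for_stalls(duration_s=60):
            end = time.monotonic() + duration_s
            while time.monotonic() < end:
                before = time.monotonic()
                time.sleep(SLEEP_S)
                elapsed = time.monotonic() - before
                if elapsed - SLEEP_S > THRESHOLD_S:
                    print("stall: slept %.0f ms instead of %.0f ms"
                          % (elapsed * 1000, SLEEP_S * 1000))

        if __name__ == "__main__":
            watch_for_stalls()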

    Read the article

  • Installing Windows on HP Proliant Servers without SmartStart

    - by Fitzroy
    I have a PXE server for deploying Windows XP and Windows 7 to workstations. The process is as follows: Boot the workstation from the NIC. Workstation sends a DHCP request. DHCP server responds with an IP address and the location of the PXE server. Workstation downloads the WinPE image file from the PXE server via TFTP. Workstation stores the WinPE image file in memory and executes it. Once booted into WinPE, I connect to a network share to gain access to either the Windows XP or Windows 7 installation files. A custom script is launched to guide you through the process of formatting and partitioning the hard drive(s) (using DISKPART and FORMAT). Another custom script asks for details such as the hostname to assign to the workstation. The answers provided are used to build an unattended answer file (SIF [Setup Information File] for WinXP and XML for Win7). The Windows setup EXE is launched, passing the unattended answer file to it as a parameter. The Windows XP and Windows 7 installation sources have been customised to include the drivers for our Dell workstations. They also run a number of scripts upon first booting up to install software packages. This process works very well for our workstations and I would now like to use it for building our servers too. The vast majority of our servers are HP Proliant DL360 G6, DL380 G5 and DL380 G6. They’re running Windows Server 2003 (various editions) or 2008 (various editions). To date, we have always built the HP Proliant servers using the SmartStart CD provided. SmartStart does three useful things for us: it sets up RAID with the HP Array Configuration Utility (ACU), it installs and configures SNMP, and it installs various HP tools for Windows (HP Array Configuration Utility, HP Array Diagnostic Utility, HP Proliant Integrated Management Log Viewer, etc.). Using SmartStart I have never had to manually download and install Windows drivers for network, sound, video, etc. I'm not sure if this is because SmartStart copies drivers from the CD during setup, or whether Windows just has the drivers natively in its driver CAB. If I abandon the SmartStart CD in favour of my PXE server I would have to do the following: As I won't have access to ACU, I'll configure the RAID (before booting to the PXE server) by pressing F8 (during the boot process) to access Option ROM Configuration for Arrays (ORCA). SNMP and the HP tools will have to be installed once the Windows installation is complete, using the Proliant Support Pack. Is this method OK? Is there anything that the SmartStart CD does that I'll be unable to do by other means? Are there any disadvantages to not using the SmartStart CD? Many thanks. UPDATE 05/01/12 I’ve been reading through the SmartStart Scripting Toolkit documentation. The scripting toolkit contains command-line tools which work within WinPE and can do such things as configure BIOS settings, configure an array and set up iLO. I’m personally not too bothered about configuring BIOS settings as I rarely deviate from the defaults (unless the server is to be a Hyper-V host). I’m not too fussed about being able to configure the array from within WinPE, as I’m happy to just press F8 and use Option ROM Configuration for Arrays (ORCA). Although, if it’s easy enough to do, I will explore this further, as it saves time if everything can be configured from within WinPE. One of the nice features all the tools possess is that you can pass input files to them. For example:
Configure one server to your requirements and capture its configuration to a file (using the appropriate tool); you can then run the tool on other servers, passing in the captured configuration as the input file. Array controller drivers appear to be included with the toolkit, along with examples of how to incorporate them into a WinPE build. I suppose WinPE won’t be able to see logical volumes (i.e. two physical disks in a RAID 1 configuration) without the array controller drivers? I mentioned in my post that SmartStart normally installs a bunch of Windows HP tools for you. I’ve had a look today, and if you run the SmartStart CD from within Windows all the tools can be installed. Therefore I can do this after the Windows installation is complete. The SmartStart CD appears to contain a lot of Windows drivers. I can customise my Windows 2008 source to incorporate these drivers. However, I understand that incorporating an array controller driver is a little different to most drivers. I believe that you have to provide the driver during the very early stages of the Windows setup. I’m working through the Scripting Toolkit documentation to try and work this out...
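
    As an illustration of the "build an unattended answer file from the answers given in WinPE" step described above, this is roughly what that script boils down to (a sketch only - the template path, output path and the {{HOSTNAME}} placeholder are made up for the example, not taken from the actual deployment scripts):

        # Sketch: fill a per-machine unattended answer file from a template.
        from pathlib import Path

        def build_answer_file(template_path, output_path, values):
            """Replace {{KEY}} placeholders in the template with per-machine values."""
            text = Path(template_path).read_text()
            for key, value in values.items():
                text = text.replace("{{%s}}" % key, value)
            Path(output_path).write_text(text)

        if __name__ == "__main__":
            build_answer_file(
                r"X:\deploy\unattend-template.xml",   # assumed template location
                r"X:\deploy\unattend.xml",            # file handed to the Windows setup EXE
                {"HOSTNAME": "SRV-DL380-01", "ORGANISATION": "Example Ltd"},
            )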

    Read the article

  • Apache on Win32: Slow transfers of single, static files over HTTP, fast over HTTPS

    - by Michael Lackner
    I have a weird problem with Apache 2.2.15 on Windows 2000 Server SP4. Basically, I am trying to serve larger static files, images, videos etc. The download seems to be capped at around 550kB/s even over 100Mbit LAN. I tried other protocols (FTP/FTPS/FTP+ES/SCP/SMB), and they are all in the multi-megabyte range. The strangest thing is that, when using Apache with HTTPS instead of HTTP, it serves very fast, around 2.7MByte/s! I also tried the AnalogX SimpleWWW server just to test the plain HTTP speed of it, and it gave me a healthy 3.3Mbyte/s. I am at a total loss here. I searched the web, and tried to change the following Apache configuration directives in httpd.conf, one at a time, mostly to no avail at all: SendBufferSize 1048576 #(tried multiples of that too, up to 100Mbytes) EnableSendfile Off #(minor performance boost) EnableMMAP Off Win32DisableAcceptEx HostnameLookups Off #(default) I also tried to tune the following registry parameters, setting their values to 4194304 in decimal (they are REG_DWORD), and rebooting afterwards: HKLM\SYSTEM\CurrentControlSet\Services\AFD\Parameters\DefaultReceiveWindow HKLM\SYSTEM\CurrentControlSet\Services\AFD\Parameters\DefaultSendWindow Additionally, I tried to install mod_bw, which sets the event timer precision to 1ms, and allows for bandwidth throttling. According to some people it boosts static file serving performance when set to unlimited bandwidth for everybody. Unfortunately, it did nothing for me. So: AnalogX HTTP: 3300kB/s Gene6 FTPD, plain: 3500kB/s Gene6 FTPD, Implicit and Explicit SSL, AES256 Cipher: 1800-2000kB/s freeSSHD: 1100kB/s SMB shared folder: about 3000kB/s Apache HTTP, plain: 550kB/s Apache HTTPS: 2700kB/s Clients that were used in the bandwidth testing: Internet Explorer 8 (HTTP, HTTPS) Firefox 8 (HTTP, HTTPS) Chrome 13 (HTTP, HTTPS) Opera 11.60 (HTTP, HTTPS) wget under CygWin (HTTP, HTTPS) FileZilla (FTP, FTPS, FTP+ES, SFTP) Windows Explorer (SMB) Generally, transfer speeds are not too high, but that's because the server machine is an old quad Pentium Pro 200MHz machine with 2GB RAM. However, I would like Apache to serve at at least 2Mbyte/s instead of 550kB/s, and that already works with HTTPS easily, so I fail to see why plain HTTP is so crippled. I am using a Kerio Winroute Firewall, but no Throttling and no special filters peeking into HTTP traffic, just the plain Firewall functionality for blocking/allowing connections. The Apache error.log (Loglevel info) shows no warnings, no errors. Also nothing strange to be seen in access.log. I have already stripped down my httpd.conf to the bare minimum just to make sure nothing is interfering, but that didn't help either. If you have any idea, help would be greatly appreciated, since I am totally out of ideas! Thanks! Edit: I have now tried a newer Apache 2.2.21 to see if it makes any difference. However, the behaviour is exactly the same. Edit 2: KM01 has requested a sniff on the HTTP headers, so here comes the LiveHTTPHeaders output (an extension to Firefox). The Output is generated on downloading a single file called "elephantsdream_source.264", which is an H.264/AVC elementary video stream under an Open Source license. I have taken the freedom to edit the URL, removing folders and changing the actual servers domain name to www.mydomain.com. 
Here it is: LiveHTTPHeaders, Plain HTTP: http://www.mydomain.com/elephantsdream_source.264 GET /elephantsdream_source.264 HTTP/1.1 Host: www.mydomain.com User-Agent: Mozilla/5.0 (Windows NT 5.2; WOW64; rv:6.0.2) Gecko/20100101 Firefox/6.0.2 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: de-de,de;q=0.8,en-us;q=0.5,en;q=0.3 Accept-Encoding: gzip, deflate Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 Connection: keep-alive HTTP/1.1 200 OK Date: Wed, 21 Dec 2011 20:55:16 GMT Server: Apache/2.2.21 (Win32) mod_ssl/2.2.21 OpenSSL/0.9.8r PHP/5.2.17 Last-Modified: Thu, 28 Oct 2010 20:20:09 GMT Etag: "c000000013fa5-29cf10e9-493b311889d3c" Accept-Ranges: bytes Content-Length: 701436137 Keep-Alive: timeout=15, max=100 Connection: Keep-Alive Content-Type: text/plain LiveHTTPHeaders, HTTPS: https://www.mydomain.com/elephantsdream_source.264 GET /elephantsdream_source.264 HTTP/1.1 Host: www.mydomain.com User-Agent: Mozilla/5.0 (Windows NT 5.2; WOW64; rv:6.0.2) Gecko/20100101 Firefox/6.0.2 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: de-de,de;q=0.8,en-us;q=0.5,en;q=0.3 Accept-Encoding: gzip, deflate Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 Connection: keep-alive HTTP/1.1 200 OK Date: Wed, 21 Dec 2011 20:56:57 GMT Server: Apache/2.2.21 (Win32) mod_ssl/2.2.21 OpenSSL/0.9.8r PHP/5.2.17 Last-Modified: Thu, 28 Oct 2010 20:20:09 GMT Etag: "c000000013fa5-29cf10e9-493b311889d3c" Accept-Ranges: bytes Content-Length: 701436137 Keep-Alive: timeout=15, max=100 Connection: Keep-Alive Content-Type: text/plain
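
    A quick way to reproduce the throughput measurement from any client with Python installed (a rough sketch only - the domain is the same placeholder as above, the unverified SSL context is there because the test certificate would likely be self-signed, and the read is capped at 50 MB so the ~700 MB file doesn't have to be pulled in full):

        # Sketch: time the same file over HTTP and HTTPS and print effective throughput.
        import ssl
        import time
        import urllib.request

        LIMIT = 50 * 1024 * 1024   # stop after 50 MB
        CHUNK = 64 * 1024

        def measure(url):
            ctx = ssl._create_unverified_context()   # test server may use a self-signed cert
            start = time.time()
            total = 0
            with urllib.request.urlopen(url, context=ctx) as resp:
                while total < LIMIT:
                    data = resp.read(CHUNK)
                    if not data:
                        break
                    total += len(data)
            rate = total / 1024.0 / (time.time() - start)
            print("%-55s %8.0f kB/s" % (url, rate))

        for scheme in ("http", "https"):
            measure(scheme + "://www.mydomain.com/elephantsdream_source.264")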

    Read the article

  • Creating a multi-tenant application using PostgreSQL's schemas and Rails

    - by ramon.tayag
    Stuff I've already figured out I'm learning how to create a multi-tenant application in Rails that serves data from different schemas based on what domain or subdomain is used to view the application. I already have a few concerns answered: How can you get subdomain-fu to work with domains as well? Here's someone that asked the same question which leads you to this blog. What database, and how will it be structured? Here's an excellent talk by Guy Naor, and good question about PostgreSQL and schemas. I already know my schemas will all have the same structure. They will differ in the data they hold. So, how can you run migrations for all schemas? Here's an answer. Those three points cover a lot of the general stuff I need to know. However, in the next steps I seem to have many ways of implementing things. I'm hoping that there's a better, easier way. Finally, to my question When a new user signs up, I can easily create the schema. However, what would be the best and easiest way to load the structure that the rest of the schemas already have? Here are some questions/scenarios that might give you a better idea. Should I pass it on to a shell script that dumps the public schema into a temporary one, and imports it back to my main database (pretty much like what Guy Naor says in his video)? Here's a quick summary/script I got from the helpful #postgres on freenode. While this will probably work, I'm gonna have to do a lot of stuff outside of Rails, which makes me a bit uncomfortable.. which also brings me to the next question. Is there a way to do this straight from Ruby on Rails? Like create a PostgreSQL schema, then just load the Rails database schema (schema.rb - I know, it's confusing) into that PostgreSQL schema. Is there a gem/plugin that has these things already? Methods like "create_pg_schema_and_load_rails_schema(the_new_schema_name)". If there's none, I'll probably work at making one, but I'm doubtful about how well tested it'll be with all the moving parts (especially if I end up using a shell script to create and manage new PostgreSQL schemas). Thanks, and I hope that wasn't too long! UPDATE May 11, 2010 11:26 GMT+8 Since last night I've been able to get a method to work that creates a new schema and loads schema.rb into it. Not sure if what I'm doing is correct (seems to work fine, so far) but it's a step closer at least. If there's a better way please let me know. module SchemaUtils def self.add_schema_to_path(schema) conn = ActiveRecord::Base.connection conn.execute "SET search_path TO #{schema}, #{conn.schema_search_path}" end def self.reset_search_path conn = ActiveRecord::Base.connection conn.execute "SET search_path TO #{conn.schema_search_path}" end def self.create_and_migrate_schema(schema_name) conn = ActiveRecord::Base.connection schemas = conn.select_values("select * from pg_namespace where nspname != 'information_schema' AND nspname NOT LIKE 'pg%'") if schemas.include?(schema_name) tables = conn.tables Rails.logger.info "#{schema_name} exists already with these tables #{tables.inspect}" else Rails.logger.info "About to create #{schema_name}" conn.execute "create schema #{schema_name}" end # Save the old search path so we can set it back at the end of this method old_search_path = conn.schema_search_path # Tried to set the search path like in the methods above (from Guy Naor) # conn.execute "SET search_path TO #{schema_name}" # But the connection itself seems to remember the old search path. # If set this way, it works. 
conn.schema_search_path = schema_name # Directly from databases.rake. # In Rails 2.3.5 databases.rake can be found in railties/lib/tasks/databases.rake file = "#{Rails.root}/db/schema.rb" if File.exists?(file) Rails.logger.info "About to load the schema #{file}" load(file) else abort %{#{file} doesn't exist yet. It's possible that you just ran a migration!} end Rails.logger.info "About to set search path back to #{old_search_path}." conn.schema_search_path = old_search_path end end
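
    For anyone looking at the same problem outside Rails, the underlying mechanism the module above relies on is just two statements - CREATE SCHEMA and SET search_path - followed by replaying the shared structure. A minimal sketch with psycopg2 (the connection string, schema name and DDL path are placeholders, not from the original setup):

        # Sketch: create a tenant schema and load the shared structure into it.
        import psycopg2

        def create_and_load_schema(dsn, schema_name, ddl_path):
            conn = psycopg2.connect(dsn)
            conn.autocommit = True
            with conn.cursor() as cur:
                # Create the tenant schema if it does not exist yet.
                cur.execute("SELECT 1 FROM pg_namespace WHERE nspname = %s", (schema_name,))
                if cur.fetchone() is None:
                    cur.execute('CREATE SCHEMA "%s"' % schema_name)  # schema_name must be trusted
                # Point the session at the new schema, then replay the shared DDL.
                cur.execute('SET search_path TO "%s"' % schema_name)
                with open(ddl_path) as ddl:
                    cur.execute(ddl.read())
            conn.close()

        if __name__ == "__main__":
            create_and_load_schema("dbname=app user=app", "tenant_acme", "db/structure.sql")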

    Read the article

  • ASP.NET MVC 2 Model encapsulated within ViewModel Validation

    - by Program.X
    I am trying to get validation to work in ASP.NET MVC 2, but without much success. I have a complex class containing a large number of fields. (Don't ask - this is one of those real-world situations best practices can't touch.) This would normally be my Model and is a LINQ-to-SQL generated class. Because this is generated code, I have created a MetaData class as per http://davidhayden.com/blog/dave/archive/2009/08/10/AspNetMvc20BuddyClassesMetadataType.aspx. public class ConsultantRegistrationMetadata { [DisplayName("Title")] [Required(ErrorMessage = "Title is required")] [StringLength(10, ErrorMessage = "Title cannot contain more than 10 characters")] string Title { get; set; } [Required(ErrorMessage = "Forename(s) is required")] [StringLength(128, ErrorMessage = "Forename(s) cannot contain more than 128 characters")] [DisplayName("Forename(s)")] string Forenames { get; set; } // ... I've attached this to the partial class of my generated class: [MetadataType(typeof(ConsultantRegistrationMetadata))] public partial class ConsultantRegistration { // ... Because my form is complex, it has a number of dependencies, such as SelectLists, etc. which I have encapsulated in a ViewModel pattern - and included the ConsultantRegistration model as a property: public class ConsultantRegistrationFormViewModel { public Data.ConsultantRegistration ConsultantRegistration { get; private set; } public SelectList Titles { get; private set; } public SelectList Countries { get; private set; } // ... So it is essentially ViewModel=Model. My View then has: <p> <%: Html.LabelFor(model => model.ConsultantRegistration.Title) %> <%: Html.DropDownListFor(model => model.ConsultantRegistration.Title, Model.Titles,"(select a Title)") %> <%: Html.ValidationMessage("Title","*") %> </p> <p> <%: Html.LabelFor(model => model.ConsultantRegistration.Forenames) %> <%: Html.TextBoxFor(model => model.ConsultantRegistration.Forenames) %> <%: Html.ValidationMessageFor(model=>model.ConsultantRegistration.Forenames) %> </p> The problem is, the validation attributes on the metadata class are having no effect. I tried doing it via an interface, but that also had no effect. I'm beginning to think the reason is that I am encapsulating my model within a ViewModel. My Controller (Create action) is as follows: [HttpPost] public ActionResult Create(Data.ConsultantRegistration consultantRegistration) { if (ModelState.IsValid) // this is always true - which is wrong!! { try { consultantRegistration = ConsultantRegistrationRepository.SaveConsultantRegistration(consultantRegistration); return RedirectToAction("Edit", new { id = consultantRegistration.ID, sectionIndex = 2 }); } catch (Exception ex) { ModelState.AddModelError("CreateException",ex); } } return View(new ConsultantRegistrationFormViewModel(consultantRegistration)); } As outlined in the comment, the ModelState.IsValid property always returns true, despite fields with the validation annotations not being valid (Forenames being a key example). Am I missing something obvious - considering I am an MVC newbie? I'm after the mechanism demoed by Jon Galloway at http://www.asp.net/learn/mvc-videos/video-10082.aspx. (I am aware it is similar to http://stackoverflow.com/questions/1260562/asp-net-mvc-model-viewmodel-validation but that post seems to talk about xVal. I have no idea what that is and suspect it is for MVC 1.)

    Read the article

  • Scaling MKMapView Annotations relative to the zoom level

    - by Jonathan
    The Problem I'm trying to create a visual radius circle around an annotation that remains at a fixed size in real terms. E.g. if I set the radius to 100 m, then as you zoom out of the map view the radius circle gets progressively smaller. I've been able to achieve the scaling; however, the radius rect/circle seems to "jitter" away from the pin placemark as the user manipulates the view. The Manifestation Here is a video of the behaviour. The Implementation The annotations are added to the map view in the usual fashion, and I've used the delegate method on my UIViewController subclass (MapViewController) to see when the region changes. -(void)mapView:(MKMapView *)pMapView regionDidChangeAnimated:(BOOL)animated{ //Get the map view MKCoordinateRegion region; CGRect rect; //Scale the annotations for( id<MKAnnotation> annotation in [[self mapView] annotations] ){ if( [annotation isKindOfClass: [Location class]] && [annotation conformsToProtocol:@protocol(MKAnnotation)] ){ //Approximately 200 m radius region.span.latitudeDelta = 0.002f; region.span.longitudeDelta = 0.002f; region.center = [annotation coordinate]; rect = [[self mapView] convertRegion:region toRectToView: self.mapView]; if( [[[self mapView] viewForAnnotation: annotation] respondsToSelector:@selector(setRadiusFrame:)] ){ [[[self mapView] viewForAnnotation: annotation] setRadiusFrame:rect]; } } } } The Annotation object (LocationAnnotationView) is a subclass of MKAnnotationView and its setRadiusFrame looks like this -(void) setRadiusFrame:(CGRect) rect{ CGPoint centerPoint; //Invert centerPoint.x = (rect.size.width/2) * -1; centerPoint.y = 0 + 55 + ((rect.size.height/2) * -1); rect.origin = centerPoint; [self.radiusView setFrame:rect]; } And finally the radiusView object is a subclass of UIView that overrides the drawRect method to draw the translucent circles. setFrame is also overridden in this UIView subclass, but it only serves to call [UIView setNeedsDisplay] in addition to [UIView setFrame:] to ensure that the view is redrawn after the frame has been updated. The radiusView object's (CircleView) drawRect method looks like this -(void) drawRect:(CGRect)rect{ //NSLog(@"[CircleView drawRect]"); [self setBackgroundColor:[UIColor clearColor]]; //Declarations CGContextRef context; CGMutablePathRef path; //Assignments context = UIGraphicsGetCurrentContext(); path = CGPathCreateMutable(); //Alter the rect so the circle isn't clipped //Calculate the biggest size circle if( rect.size.height > rect.size.width ){ rect.size.height = rect.size.width; } else if( rect.size.height < rect.size.width ){ rect.size.width = rect.size.height; } rect.size.height -= 4; rect.size.width -= 4; rect.origin.x += 2; rect.origin.y += 2; //Create paths CGPathAddEllipseInRect(path, NULL, rect ); //Create colors [[self areaColor] setFill]; CGContextAddPath( context, path); CGContextFillPath( context ); [[self borderColor] setStroke]; CGContextSetLineWidth( context, 2.0f ); CGContextSetLineCap(context, kCGLineCapSquare); CGContextAddPath(context, path ); CGContextStrokePath( context ); CGPathRelease( path ); //CGContextRestoreGState( context ); } Thanks for bearing with me, any help is appreciated. Jonathan
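
    P.S. For scale: one degree of latitude is roughly 111 km, so the latitudeDelta of 0.002 used above spans about 0.002 × 111,000 m ≈ 222 m from edge to edge of the converted rect - worth keeping in mind when checking whether the circle is being drawn at the intended real-world size.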

    Read the article
