Search Results

Search found 8790 results on 352 pages for 'known hosts'.

Page 45/352

  • How to get feedback from the community on large chunks of code?

    - by MainMa
    Code Review.SE is great when you need feedback on a precise, short piece of code. But where can you get similar feedback about the code itself when you have thousands of LOC, don't have colleagues in your workplace ready or willing to review the code¹, and don't have thousands of dollars to spend on a professional review by a third-party developer?² Places like CodePlex are a good way to get your project known³, but from what I've seen, the feedback you get on known projects is consumer feedback, i.e. it concerns bugs and feature requests, not the quality of the source code itself. What are the social ways to get the community involved in the code review of a sizeable codebase for an open source project which doesn't have the scale of Firefox or similar products? ¹ Which is the case for most personal and open source projects, or projects done in companies where the practice of regular and complete code review is nonexistent. ² Which is, again, the case for most personal and open source projects. ³ Even if too many projects published on CodePlex never get known, either because nobody cares or because they are not presented very well.

    Read the article

  • Switching mdadm to an external bitmap

    - by Oli
    I've just read this in another post about improving RAID5/6 write speeds: "After increasing stripe cache & switching to external bitmap, my speeds are 160 Mb/s writes, 260 Mb/s reads. :-D" I've already found out how to increase the stripe cache and this worked pretty well, but I'd like to know more about an external bitmap. I have an incredibly fast (540MB/s) RAID0 SSD that would do well if a bitmap does what I think it does, but I'm still very unsure. I've only known about bitmaps for as long as I've known about that post. A few questions:
      - What is a bitmap (in terms of mdadm)?
      - What are the advantages of an internal bitmap (over external)?
      - What are the advantages of an external bitmap (over internal)?
      - How do I switch between the two?
    I should add that while this is an I'm-bored-let's-break-something thread, I do value the data stored on the RAID array. If doing this is going to put data at significant risk, please let me know.
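
    For the last question, my understanding is that the switch is a pair of mdadm --grow calls; the sketch below wraps them in Python, but the flags, the device path and the bitmap path are all assumptions to verify against man mdadm before touching an array you care about. Note that an external bitmap file must live on a filesystem that is not on the array itself.

      # Hedged sketch: drop the current bitmap, then attach an external one.
      # Verify the flags against `man mdadm` before running on a real array.
      import subprocess

      def set_bitmap(array, bitmap):
          # bitmap may be "internal", "none", or an absolute path to a file
          # on a filesystem that is NOT on the array itself
          subprocess.run(["mdadm", "--grow", array, "--bitmap=" + bitmap], check=True)

      set_bitmap("/dev/md0", "none")              # remove the existing bitmap first
      set_bitmap("/dev/md0", "/mnt/ssd/md0.bmp")  # then attach the external one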

    Read the article

  • Algorithm to match timestamped events from two sources

    - by urza.cc
    I have two different physical devices (one is a camera, one is another device) that observe the same scene and record a timestamp when a specified event occurs. So each produces a series of timestamps of when the event was observed. Theoretically the recorded timestamps should be very well aligned: picture the ideal situation as two time lines "s" and "r" as recorded from the two devices. More likely, though, they will not be so nicely aligned, and there might be missing events from timeline s or r. I am looking for an algorithm to match events from "s" and "r" so that the result will be something like: (s1,null); (s2,r1); (s3,null); (s4,r2); (s5,r3); (null,r4); (s6,r5). Or something similar, maybe with some "confidence" rating. I have some ideas, but I feel this is probably a well-known problem that has some good known solutions, and I don't know the right terminology. I am a little bit out of my element here; this is not my primary area of programming. Any help, suggestions, etc. will be appreciated.
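
    For terminology: this is essentially global sequence alignment (the Needleman-Wunsch family of dynamic-programming algorithms is the closest well-known relative). A minimal sketch follows; the gap penalty of 1.0 and the absolute-difference match cost are assumptions to tune against real data, and the final total cost could serve as a crude "confidence" score.

      # Align two timestamp series; unmatched events pair with None.
      def align(s, r, gap=1.0):
          n, m = len(s), len(r)
          INF = float("inf")
          # cost[i][j] = best cost aligning s[:i] with r[:j]
          cost = [[INF] * (m + 1) for _ in range(n + 1)]
          back = [[None] * (m + 1) for _ in range(n + 1)]
          cost[0][0] = 0.0
          for i in range(n + 1):
              for j in range(m + 1):
                  c = cost[i][j]
                  if c == INF:
                      continue
                  if i < n and j < m and c + abs(s[i] - r[j]) < cost[i + 1][j + 1]:
                      cost[i + 1][j + 1] = c + abs(s[i] - r[j])   # match s[i], r[j]
                      back[i + 1][j + 1] = (i, j)
                  if i < n and c + gap < cost[i + 1][j]:
                      cost[i + 1][j] = c + gap                    # s[i] unmatched
                      back[i + 1][j] = (i, j)
                  if j < m and c + gap < cost[i][j + 1]:
                      cost[i][j + 1] = c + gap                    # r[j] unmatched
                      back[i][j + 1] = (i, j)
          # walk back from (n, m) to recover the pairing
          pairs, i, j = [], n, m
          while (i, j) != (0, 0):
              pi, pj = back[i][j]
              if (pi, pj) == (i - 1, j - 1):
                  pairs.append((i - 1, j - 1))
              elif (pi, pj) == (i - 1, j):
                  pairs.append((i - 1, None))
              else:
                  pairs.append((None, j - 1))
              i, j = pi, pj
          return pairs[::-1]

      # prints [(0, None), (1, 0), (2, 1), (None, 2)]: indices into s and r
      print(align([1.0, 2.0, 3.5], [2.1, 3.4, 9.0]))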

    Read the article

  • How can I configure the embedded wireless card in a Toshiba Satellite Pro 4600 to work under Lubuntu 10.10?

    - by MoLE
    I'm struggling to get the embedded wireless card in this laptop to work. In 7.10 (gutsy) it worked fine. Now I'm trying to get 10.10 (maverick) working on it, and am using the Lubuntu flavour due to the low resources of this laptop. The hardware appears to be an embedded pcmcia card. pccardctl ident gives:

      Socket 0:
        product info: "TOSHIBA", "Wireless LAN Card", "Version 01.01", ""
        manfid: 0x0156, 0x0002
        function: 6 (network)

    The default kernel recognises the card and loads the orinoco_cs driver:

      orinoco_cs 0.0: Hardware identity 0005:0002:0001:0002
      orinoco_cs 0.0: Station identity 001f:0001:0006:000e
      orinoco_cs 0.0: Firmware determined as Lucent/Agere 6.14

    Then for some reason the driver isn't happy with this and gives:

      orinoco_cs 0.0: Hardware identity 0005:0002:0001:0002
      orinoco_cs 0.0: Station identity 001f:0002:0009:0030
      orinoco_cs 0.0: Firmware determined as Lucent/Agere 9.48

    All seems OK until I try to associate with my access point using Network Manager, at which point I get

      eth1: Lucent/Agere firmware doesn't support manual roaming

    repeated about 10 times before NM gives up. According to the linuxwireless.org wiki page on this driver, this is a known issue, and I quote: "Known issues: Roaming and WPA_supplicant. Lucent/Agere firmware doesn't support manual roaming. On the Agere cards, roaming is controlled by the firmware instead of userspace. You will get the above message if userspace attempts to associate with a specific AP rather than by SSID. If you are using wpa_supplicant use ap_scan=2 mode. NetworkManager uses wpa_supplicant, so the above also applies." At this point my google-fu has failed me, and I can't find how to configure Network Manager to use the mystical "ap_scan=2" mode via wpa_supplicant. I have tried the following suggested solutions (from Launchpad or the forums):

      - deleting the agere* files from /lib/firmware
      - using wicd instead of Network Manager
      - combining both
      - blacklisting the orinoco_cs driver in an attempt to force use of the hostap_cs driver instead (in case it is a prism2 card)

    Obviously none of them have worked for me. Any hints on how to perform the suggested workaround above? Edit: I have also confirmed the card working on an 8.10 (intrepid) live CD.

    Read the article

  • Choosing an open source license such that maximum value is added to a startup

    - by echo-flow
    There are many companies that produce open source software products, and many business models that these companies can use. I'm particularly interested in companies like 280 North, the company behind the Objective-J and Cappuccino frameworks. My understanding of this organization's business model is that they: worked to develop a tool which added significant value to developers, released the tool under an open source license, built a community around the tool (which was helped by the project's open source licensing), and created interesting demos illustrating the project's value. All of these things added value to the project and the company that owned it. Finally, 280 North was sold to Motorola. My question has to do with the role of software licensing in this particular business model. 280 North licensed their software projects under the LGPL, which gave them some proprietary control over how the project could be used. I believe that the LGPL is what's known as a "weak copyleft" license, meaning that the project can be linked to without the linking code also being licensed under the LGPL, but software derived directly from the project would need to be licensed under the LGPL. For web-oriented libraries in particular, weak-copyleft or non-copyleft licensing seems to be quite common; I can't think of a single example of a popular or well-known web-oriented library that is licensed under the GPL (or AGPL). The question, then, is: how much value would a weak copyleft license like the LGPL add to a software venture like 280 North, versus a non-copyleft license such as the BSD license or the Apache Software License? I'd really appreciate any insight anyone can offer into this, but I'd be most interested in answers that can cite other companies as case studies or examples.

    Read the article

  • Testing my model for hybrid scheduling in Embedded Systems

    - by markusian
    I am working on a project for school, where I have to analyze the performance of a few fixed-priority server algorithms (polling server, deferrable server, priority exchange) using a simulator in the case of hybrid scheduling, where we have both hard periodic tasks and soft aperiodic tasks. In my model I consider that:
      - the hard tasks have a period equal to their deadline, with a known worst-case execution time (wcet); the actual execution time can be smaller than the wcet;
      - the soft tasks have a known wcet and random interarrival times; again, the actual execution time can be smaller than the wcet.
    In order to test those algorithms I need realistic case studies. For this reason I'm digging in the scientific literature, but I am facing different problems:
      - Sometimes I find a list of hard tasks with wcet, but it is not specified how the soft tasks' parameters are found.
      - Given the wcet of a task, how can I model its actual execution time? That is, what random distribution should I use, given the wcet?
      - How can I model the random interarrival times of soft aperiodic tasks?
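
    On the last two questions, one common choice in simulation studies (an assumption, not the only defensible model) is to draw actual execution times uniformly between an assumed best case and the wcet, and to model aperiodic arrivals as a Poisson process, i.e. exponentially distributed interarrival times. A sketch:

      # One common modelling assumption: uniform actual cost in [bcet, wcet]
      # and Poisson (exponential interarrival) aperiodic releases.
      import random

      def actual_cost(wcet, bcet_ratio=0.5):
          # actual execution time never exceeds the wcet
          return random.uniform(bcet_ratio * wcet, wcet)

      def interarrival(mean_gap):
          # Poisson arrivals give exponentially distributed gaps
          return random.expovariate(1.0 / mean_gap)

      t, jobs = 0.0, []
      for _ in range(10):
          t += interarrival(40.0)
          jobs.append((t, actual_cost(5.0)))   # (release time, execution time)
      print(jobs)

    A beta distribution rescaled onto [bcet, wcet] is another option seen in the literature when actual execution times cluster near one end of the range.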

    Read the article

  • Would this data requirement suit a Document -Oriented database?

    - by codecowboy
    I have a requirement to allow users to fill in journal/diary entries per day. I want to provide a handful of known journal templates with x columns to fill in. An example might be a thought diary: a user has to record a thought in one column, describe the situation, rate how they felt, etc. The other requirement is that a user should be able to create their own diary templates. They might need a 10-column diary entry per day and might need to rate some aspect out of 50 instead of 10. In an RDBMS, I can see this getting quite complicated. I could have individual tables for my known templates, as their fields will be fixed. But for custom diary templates I imagine I would need a table storing custom_field_types (the diary columns), a table storing entries referencing their field types (custom_entries), and then a third custom_diary table which would store rows matching custom_entries to diaries. Leaving performance/scaling aside, would it be any simpler or make more sense to use a document-oriented database like MongoDB to store this data? This is for a web application which might later need an API for mobile devices.
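
    To make the document-store option concrete, here is a sketch of how one built-in template and one entry might be stored; every field name is an illustrative assumption, and the point is that a user-defined 10-column template is just another document, with no schema migration:

      # Illustrative documents (field names are assumptions, not a design):
      template = {
          "_id": "thought-diary",
          "owner": None,  # None for the built-in templates
          "columns": [
              {"name": "thought", "type": "text"},
              {"name": "situation", "type": "text"},
              {"name": "feeling", "type": "rating", "max": 10},
          ],
      }
      entry = {
          "user": "some-user-id",
          "template": "thought-diary",
          "date": "2012-05-14",
          "values": {"thought": "...", "situation": "...", "feeling": 7},
      }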

    Read the article

  • What are the design principles that promote testable code? (designing testable code vs driving design through tests)

    - by bot
    Most of the projects that I work on treat development and unit testing in isolation, which makes writing unit tests at a later stage a nightmare. My objective is to keep testing in mind during the high-level and low-level design phases themselves. I want to know if there are any well-defined design principles that promote testable code. One such principle that I have come to understand recently is Dependency Inversion, through Dependency Injection and Inversion of Control. I have read that there is something known as SOLID. I want to understand whether following the SOLID principles indirectly results in code that is easily testable, and if not, whether there are any well-defined design principles that promote testable code. I am aware that there is something known as Test Driven Development. However, I am more interested in designing code with testing in mind during the design phase itself, rather than driving design through tests. I hope this makes sense. One more question related to this topic: is it alright to refactor an existing product/project, making changes to code and design, for the purpose of being able to write a unit test case for each module?
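
    As a concrete illustration of the Dependency Injection point (a minimal sketch in Python, with all names invented for the example): because the collaborator is passed in rather than constructed internally, a test can substitute a fake without patching anything global.

      # The dependency is injected through the constructor, so tests control it.
      class SignupService:
          def __init__(self, mailer):
              self._mailer = mailer
          def register(self, email):
              self._mailer.send(email, "welcome")

      class FakeMailer:
          # test double standing in for a real SMTP-backed mailer
          def __init__(self):
              self.sent = []
          def send(self, to, body):
              self.sent.append((to, body))

      fake = FakeMailer()
      SignupService(fake).register("a@example.com")
      assert fake.sent == [("a@example.com", "welcome")]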

    Read the article

  • apache2 namevirtualhost resolving wrong site

    - by joe
    Running Apache 2.2.6. I'm setting up a development environment. Dev and production will be hosted on the same machine, same IP address. DNS entries like prod.domain.com and dev.domain.com point to the same IP. Important: it is required that dev and prod are otherwise completely separate. Each will run its own Apache instance. Each will use its own Apache configuration. Each, prod and dev, will host http and https. I have this set up and working, but not as restrictive as I'd like. For instance, the production config:

      NameVirtualHost *:80
      NameVirtualHost *:443
      <VirtualHost *:80>
        ServerName prod.domain.com
        # ... etc
      </VirtualHost>
      <VirtualHost *:443>
        ServerName prod.domain.com
        # ... etc
      </VirtualHost>

    The dev site is set up similarly, using ports 8080 and 4443. Each site works fine. But assuming both apaches are running, one can also hit "cross-site" by mistake. So, inadvertently hitting prod.domain.com:8080 successfully returns a page from the dev site. It would be much better if this failed completely. This is a bit more difficult to solve (for me) because of the need for two apache configs. If it were all in one, the single process would have full knowledge of everything. So, I tried to solve this with brute force, including virtual hosts for the "other" site with something that would fail, like no access to the documentroot. But apache then inexplicably finds the "wrong" virtual host. Here's the full config for production, with the dummy dev configs:

      NameVirtualHost *:80
      NameVirtualHost *:443

      # ----------------------------------------------
      # DUMMY HOSTS
      <VirtualHost *:8080>
        ServerName dev.domain.com:8080
        DocumentRoot /tmp/
        <Directory /tmp/>
          Order deny,allow
          Deny from all
        </Directory>
      </VirtualHost>
      <VirtualHost *:4443>
        ServerName dev.domain.com:4443
        DocumentRoot /tmp/
        <Directory /tmp/>
          Order deny,allow
          Deny from all
        </Directory>
      </VirtualHost>

      # ----------------------------------------------
      # REAL PRODUCTION HOSTS
      <VirtualHost *:80>
        ServerName prod.domain.com:80
        DocumentRoot /something/valid/
        <Directory /something/valid/>
          Order allow,deny
          Allow from all
        </Directory>
      </VirtualHost>
      <VirtualHost *:443>
        ServerName prod.domain.com:443
        DocumentRoot /something/valid/
        <Directory /something/valid/>
          Order allow,deny
          Allow from all
        </Directory>
        # .... other valid ssl setup
      </VirtualHost>

    Here's the strange thing. With this configuration, a prod.domain.com:80 hit succeeds. But a prod.domain.com:443 hit fails, because it finds dev.domain.com:4443 instead. I've also tried removing the port from the ServerName, but it still doesn't work. Sorry for the long question. Hopefully this is enough information. Thanks in advance for any help.

    Read the article

  • Openswan ipsec transport tunnel not going up

    - by gparent
    On ClusterA and ClusterB I have installed the "openswan" package on Debian Squeeze. ClusterA's IP is 172.16.0.107, ClusterB's is 172.16.0.108. When they ping one another, the pings do not reach the destination.

    /etc/ipsec.conf:
      version 2.0 # conforms to second version of ipsec.conf specification
      config setup
        protostack=netkey
        oe=off
      conn L2TP-PSK-CLUSTER
        type=transport
        left=172.16.0.107
        right=172.16.0.108
        auto=start
        ike=aes128-sha1-modp2048
        authby=secret
        compress=yes

    /etc/ipsec.secrets:
      172.16.0.107 172.16.0.108 : PSK "L2TPKEY"
      172.16.0.108 172.16.0.107 : PSK "L2TPKEY"

    Here is the result of ipsec verify on both machines:

      root@cluster2:~# ipsec verify
      Checking your system to see if IPsec got installed and started correctly:
      Version check and ipsec on-path                               [OK]
      Linux Openswan U2.6.28/K2.6.32-5-amd64 (netkey)
      Checking for IPsec support in kernel                          [OK]
      NETKEY detected, testing for disabled ICMP send_redirects     [OK]
      NETKEY detected, testing for disabled ICMP accept_redirects   [OK]
      Checking that pluto is running                                [OK]
      Pluto listening for IKE on udp 500                            [OK]
      Pluto listening for NAT-T on udp 4500                         [FAILED]
      Checking for 'ip' command                                     [OK]
      Checking for 'iptables' command                               [OK]
      Opportunistic Encryption Support                              [DISABLED]
      root@cluster2:~#

    This is the end of the output of ipsec auto --status:

      000 "cluster": 172.16.0.108<172.16.0.108>[+S=C]...172.16.0.107<172.16.0.107>[+S=C]; prospective erouted; eroute owner: #0
      000 "cluster": myip=unset; hisip=unset;
      000 "cluster": ike_life: 3600s; ipsec_life: 28800s; rekey_margin: 540s; rekey_fuzz: 100%; keyingtries: 0
      000 "cluster": policy: PSK+ENCRYPT+COMPRESS+PFS+UP+IKEv2ALLOW+lKOD+rKOD; prio: 32,32; interface: eth0;
      000 "cluster": newest ISAKMP SA: #1; newest IPsec SA: #0;
      000 "cluster": IKE algorithm newest: AES_CBC_128-SHA1-MODP2048
      000
      000 #3: "cluster":500 STATE_QUICK_R0 (expecting QI1); EVENT_CRYPTO_FAILED in 298s; lastdpd=-1s(seq in:0 out:0); idle; import:admin initiate
      000 #2: "cluster":500 STATE_QUICK_I1 (sent QI1, expecting QR1); EVENT_RETRANSMIT in 13s; lastdpd=-1s(seq in:0 out:0); idle; import:admin initiate
      000 #1: "cluster":500 STATE_MAIN_I4 (ISAKMP SA established); EVENT_SA_REPLACE in 2991s; newest ISAKMP; lastdpd=-1s(seq in:0 out:0); idle; import:admin initiate
      000

    Interestingly enough, if I do ike-scan on the server, here's what happens (it doesn't seem to take my ike settings into account):

      root@cluster1:~# ike-scan -M 172.16.0.108
      Starting ike-scan 1.9 with 1 hosts (http://www.nta-monitor.com/tools/ike-scan/)
      172.16.0.108  Main Mode Handshake returned
        HDR=(CKY-R=641bffa66ba717b6)
        SA=(Enc=3DES Hash=SHA1 Auth=PSK Group=2:modp1024 LifeType=Seconds LifeDuration(4)=0x00007080)
        VID=4f45517b4f7f6e657a7b4351
        VID=afcad71368a1f1c96b8696fc77570100 (Dead Peer Detection v1.0)
      Ending ike-scan 1.9: 1 hosts scanned in 0.008 seconds (118.19 hosts/sec). 1 returned handshake; 0 returned notify
      root@cluster1:~#

    I can't tell what's going on here; this is pretty much the simplest config I can have according to the examples.

    Read the article

  • Setting up a DNS name server for a mass virtual host with Bind9

    - by Dez
    I am trying to set up a chrooted DNS name server on a local LAN, so that everyone connected to the LAN can access the mass virtual hosts defined for a development environment without having to manually edit their local /etc/hosts one by one. The mass virtual host is named example.user.dev (VirtualDocumentRoot /home/user/example) and example.test (DocumentRoot /var/www/example). I set up everything and /var/log/syslog doesn't show any error, but when checking the DNS with host -v example.test it doesn't find the host. Using the dig command I also receive no answer:

      dig -x example.test

      ; <<>> DiG 9.5.1-P3 <<>> -x imprimere
      ;; global options: printcmd
      ;; Got answer:
      ;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 47844
      ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0

      ;; QUESTION SECTION:
      ;imprimere.in-addr.arpa.    IN    PTR

      ;; AUTHORITY SECTION:
      in-addr.arpa.  600  IN  SOA  a.root-servers.net. dns-ops.arin.net. 2010042604 1800 900 691200 10800

      ;; Query time: 108 msec
      ;; SERVER: 80.58.0.33#53(80.58.0.33)
      ;; WHEN: Mon Apr 26 11:15:53 2010
      ;; MSG SIZE rcvd: 107

    My configuration is the following:

    /etc/bind/named.conf.local
      zone "example.test" {
        type master;
        allow-query { any; };
        file "/etc/bind/zones/master_example.test";
        notify yes;
      };
      zone "1.168.192.in-addr.arpa" {
        type master;
        allow-query { any; };
        file "/etc/bind/zones/master_1.168.192.in-addr.arpa";
        notify yes;
      };

    /etc/bind/named.conf.options (note: we have a static IP address, so I forward queries to the DNS server at said IP address)
      options {
        directory "/var/cache/bind";
        forwarders { 80.34.100.160; };
        auth-nxdomain no;
        listen-on-v6 { any; };
      };

    /etc/bind/zones/master_example.test
      $ORIGIN example.test.
      $TTL 86400
      @ IN SOA example.test. root.example.test. (
            201004227 ; serial
            28800     ; refresh
            14400     ; retry
            3600000   ; expire
            86400 )   ; min
      ; TXT "example.test, DNS service"
      @             IN NS    example.test.
      localhost     A        127.0.0.1
      example.test. A        192.168.1.52
      example       A        192.168.1.52
      www           CNAME    example.test.

    /etc/hosts
      127.0.0.1 localhost example
      192.168.1.52 localhost example example.test

    /etc/resolv.conf (for Bind I just added the 3 last lines)
      nameserver 80.58.0.33
      nameserver 80.58.61.250
      nameserver 80.58.61.254
      search example.test
      search example
      nameserver 192.168.1.52

    Read the article

  • IBM Server Config questions

    - by Joel Coel
    I have a few questions on a potential server setup. First, the situation: last year we bought an IBM x3500 server with 2 Xeon E5410s, 9GB RAM, and 6 HDDs. The original intent for this server was to replace the old Exchange e-mail server. It was brought in, set up, and then shortly after we switched to Gmail. Shortly after that my predecessor left for greener pastures, and finally I was hired. So this nice server is now sitting (mostly) idle. This year I have budget again for one server, and of course I want to put this other server to work. I've been thinking about the best use for the two servers, and I think I finally have a plan for what I want to do with them.

    The idea is to use the two newer servers as a pair of VM hosts. I will set up each server with the same 8 VMs, but divide up the load so that only 4 are active per physical host. That means I've normally got 2GB RAM + 2 cores per host. I've done some load testing to pick out which servers to convert to virtual, and chose them so that each host would be capable of handling the entire set of 8 by itself in a pinch with 1 core and 1GB RAM, but would be very taxed to do so. This should take our data center from 13 total servers down to 7. The "servers" I'm replacing are mostly re-purposed desktops, so I'm more than happy to be able to do this. Now it's time to go shopping for the new server. I'd like my two hosts to match as closely as possible, so I'm looking at IBM again. It also helps that we have some educational matching grant money from IBM that I need to use to help pay for this system (we're a small private college). So finally (if you're not bored already), we come to my questions:

    1. Am I missing anything big or obvious in this plan? I'm a little worried about network performance since the VM hosts will only have 4 NICs total where 8 used to be, but I don't think it will be a problem. Is there anything else like this I might be overlooking? Am I making it too complicated?

    2. IBM no longer has a good analog to last year's server. If I want to match the performance (8 cores, 9GB RAM, 1333MHz front side bus, 6 spindles), I have to spend quite a bit more than we paid last year: $2K+, or nearly a 33% cost increase, and this only brings a marginal increase in performance. The alternative to stay in budget is to take a hit on the FSB down to 800MHz or cut the number of cores in half, neither of which is attractive. The main cost culprit is the processor. IBM no longer offers the E5410; it's listed as a part, but not available in any of the server configs I've looked at. I'm considering getting the cheapest 800MHz-FSB dual-core Xeon I can configure and then buying the E5410s separately. That's still an extra $350 I wasn't counting on, but that's better than $2K. I want to know what others think of this - will it work, or will I end up with the wrong motherboard or some other issue? Am I missing a simple way to configure the server I really want?

    3. I don't really intend to do this, but one option to save some money is to omit the redundant power supply. Since my redundancy plan for these systems is to switch over to a completely different host, the extra power supply isn't fully necessary. That said, it's still very helpful to avoid even short downtimes while I switch over VMs. Has anyone done this?

    Read the article

  • Samba with remote LDAP authentication doesn't see users properly

    - by LucasBr
    I'm trying to set up a samba server authenticated by a remote LDAP server, and I'm having some problems that I can't figure out how to solve. Running getent passwd on the samba server works and I can see all the users from the LDAP server, but when I try to access \\SAMBASERVER from my windows box I get this in /var/log/samba/log.mywindowsbox:

      <...snip...>
      [2012/10/19 13:05:22.449684, 2] smbd/sesssetup.c:1413(setup_new_vc_session) setup_new_vc_session: New VC == 0, if NT4.x compatible we would close all old resources.
      [2012/10/19 13:05:22.449692, 3] smbd/sesssetup.c:1212(reply_sesssetup_and_X_spnego) Doing spnego session setup
      [2012/10/19 13:05:22.449701, 3] smbd/sesssetup.c:1254(reply_sesssetup_and_X_spnego) NativeOS=[] NativeLanMan=[] PrimaryDomain=[]
      [2012/10/19 13:05:22.449717, 3] libsmb/ntlmssp.c:747(ntlmssp_server_auth) Got user=[lucas] domain=[BUSINESS] workstation=[MYWINDOWSBOX] len1=24 len2=24
      [2012/10/19 13:05:22.449747, 3] auth/auth.c:216(check_ntlm_password) check_ntlm_password: Checking password for unmapped user [BUSINESS]\[lucas]@[MYWINDOWSBOX] with the new password interface
      [2012/10/19 13:05:22.449759, 3] auth/auth.c:219(check_ntlm_password) check_ntlm_password: mapped user is: [SAMBASERVER]\[lucas]@[MYWINDOWSBOX]
      [2012/10/19 13:05:22.449773, 3] smbd/sec_ctx.c:210(push_sec_ctx) push_sec_ctx(0, 0) : sec_ctx_stack_ndx = 1
      [2012/10/19 13:05:22.449783, 3] smbd/uid.c:429(push_conn_ctx) push_conn_ctx(0) : conn_ctx_stack_ndx = 0
      [2012/10/19 13:05:22.449791, 3] smbd/sec_ctx.c:310(set_sec_ctx) setting sec ctx (0, 0) - sec_ctx_stack_ndx = 1
      [2012/10/19 13:05:22.449922, 2] lib/smbldap.c:950(smbldap_open_connection) smbldap_open_connection: connection opened
      [2012/10/19 13:05:23.001517, 3] lib/smbldap.c:1166(smbldap_connect_system) ldap_connect_system: successful connection to the LDAP server
      [2012/10/19 13:05:23.007713, 3] smbd/sec_ctx.c:418(pop_sec_ctx) pop_sec_ctx (0, 0) - sec_ctx_stack_ndx = 0
      [2012/10/19 13:05:23.007733, 3] auth/auth_sam.c:399(check_sam_security) check_sam_security: Couldn't find user 'lucas' in passdb.
      [2012/10/19 13:05:23.007743, 2] auth/auth.c:314(check_ntlm_password) check_ntlm_password: Authentication for user [lucas] -> [lucas] FAILED with error NT_STATUS_NO_SUCH_USER
      [2012/10/19 13:05:23.007760, 3] smbd/error.c:80(error_packet_set) error packet at smbd/sesssetup.c(111) cmd=115 (SMBsesssetupX) NT_STATUS_LOGON_FAILURE
      [2012/10/19 13:05:23.010469, 3] smbd/process.c:1489(process_smb) Transaction 3 of length 142 (0 toread)
      <...snip...>

    The /etc/samba/smb.conf file follows:

      [global]
      dos charset = 850
      unix charset = LOCALE
      workgroup = BUSINESS
      netbios name = SAMBASERVER
      bind interfaces only = true
      interfaces = lo eth0 eth1
      smb ports = 139
      hosts deny = All
      hosts allow = 192.168.78. 192.168.255. 127.0.0.1 10.149.122. 192.168.0.
      name resolve order = wins bcast hosts
      log level = 3
      syslog = 0
      log file = /var/log/samba/log.%m
      max log size = 100000
      domain logons = No
      wins support = Yes
      wins proxy = No
      client ntlmv2 auth = Yes
      lanman auth = Yes
      ntlm auth = Yes
      dns proxy = Yes
      time server = Yes
      security = user
      encrypt passwords = Yes
      obey pam restrictions = Yes
      ldap password sync = Yes
      unix password sync = Yes
      passdb backend = ldapsam:"ldap://192.168.78.206"
      ldap ssl = off
      ldap admin dn = uid=root,ou=Users,dc=business,dc=intranet
      ldap suffix =
      ldap group suffix = ou=Groups
      ldap user suffix = ou=Users
      ldap machine suffix = ou=Computers
      ldap idmap suffix = ou=Idmap
      ldap delete dn = Yes
      add user script = /usr/sbin/smbldap-useradd -m "%u"
      delete user script = /usr/sbin/smbldap-userdel "%u"
      add group script = /usr/sbin/smbldap-groupadd -p "%g"
      delete group script = /usr/sbin/smbldap-groupdel "%g"
      add user to group script = /usr/sbin/smbldap-groupmod -m "%u" "%g"
      delete user from group script = /usr/sbin/smbldap-groupmod -x "%u" "%g"
      set primary group script = /usr/sbin/smbldap-usermod -g "%g" "%u"
      add machine script = /usr/sbin/smbldap-useradd -W -t5 "%u"
      idmap backend = ldap:"ldap://192.168.78.206"
      idmap uid = 16777216-33554431
      idmap gid = 16777216-33554431
      load printers = No
      printcap name = /dev/null
      map acl inherit = Yes
      map untrusted to domain = Yes
      enable privileges = Yes
      veto files = /lost+found/ /publicftp/

    So, \\SAMBASERVER says it couldn't find my user, but I can see the user with getent passwd. What can I do so that SAMBASERVER sees and authenticates my user? Thanks in advance!

    Read the article

  • Setting up home DNS with Ubuntu Server

    - by Zeophlite
    I have a webserver (with static IP 192.168.1.5), and I want the machines on my local network to be able to access it without modifying /etc/hosts (or the equivalent for Windows/OSX). My router has primary DNS server 192.168.1.5 and secondary DNS server 8.8.8.8 (Google's public DNS). Nginx is set up to serve websites externally as *.example.com. Internally, I want *.example.local to point to the server. My webserver has BIND9 installed, but I'm unsure of the settings. I've been through various contradicting tutorials, and so most of my settings have been clobbered. I've stripped out the lines which I'm confused about. The tutorials I looked at are http://tech.surveypoint.com/blog/installing-a-local-dns-server-behind-a-hardware-router/ and http://ubuntuforums.org/showthread.php?t=236093 . They mostly differ on what should be put in /etc/bind/zones/db.example.local and /etc/bind/zones/db.192, so I've left the conflicting lines out below. Can someone suggest the correct lines to get the behaviour above (namely *.example.local pointing to 192.168.1.5)?

    /etc/network/interfaces
      auto lo
      iface lo inet loopback
      auto eth0
      iface eth0 inet static
        address 192.168.1.5
        netmask 255.255.255.0
        broadcast 192.168.1.255
        gateway 192.168.1.254

    /etc/hostname
      avalon

    /etc/resolv.conf
      # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
      # DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN

    /etc/bind/named.conf.options
      options {
        directory "/var/cache/bind";
        forwarders { 8.8.8.8; 8.8.4.4; };
        dnssec-validation auto;
        auth-nxdomain no; # conform to RFC1035
        listen-on-v6 { any; };
      };

    /etc/bind/named.conf.local
      zone "example.local" {
        type master;
        file "/etc/bind/zones/db.example.local";
      };
      zone "1.168.192.in-addr.arpa" {
        type master;
        file "/etc/bind/zones/db.192";
      };

    /etc/bind/zones/db.example.local
      $TTL 604800
      @ IN SOA avalon.example.local. webadmin.example.local. (
            5       ; Serial, increment each edit
            604800  ; Refresh
            86400   ; Retry
            2419200 ; Expire
            604800 ); Negative Cache TTL

    /etc/bind/zones/db.192
      $TTL 604800
      @ IN SOA avalon.example.local. webadmin.example.local. (
            4       ; Serial, increment each edit
            604800  ; Refresh
            86400   ; Retry
            2419200 ; Expire
            604800 ); Negative Cache TTL
      ;

    What do I need to add to the above files so that on a laptop on the internal network, I can type in webapp.example.local and be served by my webserver?

    EDIT: I made several changes to the above files on the webserver.

    /etc/network/interfaces (end of file)
      dns-nameservers 127.0.0.1
      dns-search example.local

    /etc/bind/zones/db.example.local (end of file)
      @      IN NS    avalon.example.local.
      @      IN A     192.168.1.5
      avalon IN A     192.168.1.5
      webapp IN A     192.168.1.5
      www    IN CNAME 192.168.1.5

    /etc/bind/zones/db.192 (end of file)
         IN NS  avalon.example.local.
      73 IN PTR avalon.example.local.

    As a side note, my spare Win7 machine was able to connect directly to webapp.example.local, but for an Ubuntu 13.10 machine I had to make the following changes as well (not on the webserver, but on the separate machine):

    /etc/nsswitch.conf
      before: hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4
      after:  hosts: files dns

    /etc/NetworkManager/NetworkManager.conf
      before: dns=dnsmasq
      after:  #dns=dnsmasq

    The issue remains that it's not wildcard DNS, and so I have to add entries to /etc/bind/zones/db.example.local for webapp1, webapp2, ...

    Read the article

  • A VS2010 Project Made From Post: How to: Host a WCF Service in a Managed Windows Service

    MSDN has a very nice article on how to create a windows service that hosts a Windows Communication Foundation (WCF) service. It explains all the details of doing this in a step by step fashion. One thing that I often find missing from these articles is the actual Visual Studio project that I can download [...]

    Read the article

  • AWStats: Visits from IP address vs Crawlers

    - by user3651934
    I use AWStats in cPanel to see stats for my website. Under the Hosts section I see one IP address that has visited 150 pages. I am not sure one person would have visited 150 pages using a browser. But if these 150 pages were visited by a software application, shouldn't it be listed under the Robots/Spiders section? So how do I determine whether I should block a certain IP address that has visited several hundred pages of my website? Thanks
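
    One way to decide is to look past AWStats at the raw access log for that IP: a real browser typically sends a common User-Agent and also fetches CSS/JS/images, while a scraper often sends one odd agent and requests pages only. A sketch (the log path, the combined log format, and the IP are assumptions for illustration):

      # Inspect the raw access log for one suspicious IP.
      import collections, re

      suspect = "203.0.113.42"   # hypothetical IP under investigation
      agents = collections.Counter()
      hits = 0
      with open("access.log") as log:
          for line in log:
              if not line.startswith(suspect + " "):
                  continue
              hits += 1
              # in the combined format, the last quoted field is the User-Agent
              m = re.search(r'"([^"]*)"\s*$', line)
              if m:
                  agents[m.group(1)] += 1
      print(hits, "hits")
      for agent, count in agents.most_common(5):
          print(count, agent)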

    Read the article

  • Algorithmia Source Code released on CodePlex

    - by FransBouma
    Following the release of our BCL Extensions Library on CodePlex, we have now released the source-code of Algorithmia on CodePlex! Algorithmia is an algorithm and data-structures library for .NET 3.5 or higher and is one of the pillars LLBLGen Pro v3's designer is built on. The library contains many data-structures and algorithms, and the source-code is well documented and commented, often with links to official descriptions and papers of the algorithms and data-structures implemented. The source-code is shared using Mercurial on CodePlex and is licensed under the friendly BSD2 license. User documentation is not available at the moment but will be added soon. One of the main design goals of Algorithmia was to create a library which contains implementations of well-known algorithms which weren't already implemented in .NET itself. This way, more developers out there can enjoy the results of many years of what the field of Computer Science research has delivered. Some algorithms and datastructures are known in .NET but are re-implemented because the implementation in .NET isn't efficient for many situations or lacks features. An example is the linked list in .NET: it doesn't have an O(1) concat operation, as every node refers to the containing LinkedList object it's stored in. This is bad for algorithms which rely on O(1) concat operations, like the Fibonacci heap implementation in Algorithmia. Algorithmia therefore contains a linked list with an O(1) concat feature. The following functionality is available in Algorithmia:

      - Command, Command management. This system is usable to build a fully undo/redo aware system by building your object graph using command-aware classes. The Command pattern is implemented using a system which allows transparent undo-redo and command grouping so you can use it to make a class undo/redo aware and set properties, use its contents without using commands at all. The Commands namespace is the namespace to start. Classes you'd want to look at are CommandifiedMember, CommandifiedList and KeyedCommandifiedList. See the CommandQueueTests in the test project for examples.
      - Graphs, Graph algorithms. Algorithmia contains a sophisticated graph class hierarchy and algorithms implemented onto them: non-directed and directed graphs, as well as a subgraph view class, which can be used to create a view onto an existing graph class which can be self-maintaining. Algorithms include transitive closure, topological sorting and others. A feature-rich depth-first search (DFS) crawler is available so DFS based algorithms can be implemented quickly. All graph classes are undo/redo aware, as they can be set to be 'commandified'. When a graph is 'commandified' it will do its housekeeping through commands, which makes it fully undo-redo aware, so you can remove, add and manipulate the graph and undo/redo the activity automatically without any extra code. If you define the properties of the class you set as the vertex type using CommandifiedMember, you can manipulate the properties of vertices and the graph contents with full undo/redo functionality without any extra code.
      - Heaps. Heaps are data-structures which have the largest or smallest item stored in them always as the 'root'. Extracting the root from the heap makes the heap determine the next in line to be the 'maximum' or 'minimum' (max-heap vs. min-heap, all heaps in Algorithmia can do both). Algorithmia contains various heaps, among them an implementation of the Fibonacci heap, one of the most efficient heap datastructures known today, especially when you want to merge different instances into one.
      - Priority queues. Priority queues are specializations of heaps. Algorithmia contains a couple of them.
      - Sorting. What's an algorithm library without sort algorithms? Algorithmia implements a couple of sort algorithms which sort the data in-place. This aspect is important in situations where you want to sort the elements in a buffer/list/ICollection in-place, so all data stays in the data-structure it already is stored in.
      - PropertyBag. It re-implements Tony Allowatt's original idea in .NET 3.5 specific syntax, which is to have a generic property bag and to be able to build an object in code at runtime which can be bound to a property grid for editing. This is handy for when you have data / settings stored in XML or another format, and want to create an editable form of it without creating many editors.
      - IEditableObject/IDataErrorInfo implementations. It contains default implementations for IEditableObject and IDataErrorInfo (EditableObjectDataContainer for IEditableObject and ErrorContainer for IDataErrorInfo), which make it very easy to implement these interfaces (just a few lines of code) without having to worry about bookkeeping during databinding. They work seamlessly with CommandifiedMember as well, so your undo/redo aware code can use them out of the box.
      - EventThrottler. It contains an event throttler, which can be used to filter out duplicate events in an event stream coming into an observer from an event. This can greatly enhance performance in your UI without needing to do anything other than hooking it up so it's placed between the event source and your real handler. If your UI is flooded with events from data-structures observed by your UI or a middle tier, you can use this class to filter out duplicates to avoid redundant updates to UI elements or to avoid having observers choke on many redundant events.
      - Small, handy stuff. A MultiValueDictionary, which can store multiple unique values per key, instead of one with the default Dictionary, and is also merge-aware so you can merge two into one. A Pair class, to quickly group two elements together. Multiple interfaces for helping with building a de-coupled, observer based system, and some utility extension methods for the defined data-structures.

    We regularly update the library with new code. If you have ideas for new algorithms or want to share your contribution, feel free to discuss it on the project's Discussions page or send us a pull request. Enjoy!
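
    The O(1) concat point is easy to see in a sketch (Python for brevity; illustrative, not Algorithmia's C# source): when nodes carry no reference to their owning list, concatenation is pure pointer surgery on the head and tail.

      # Minimal linked list with O(1) concat: nodes never back-reference
      # their owning list, so two lists can be spliced without per-node work.
      class Node:
          def __init__(self, value):
              self.value, self.next = value, None

      class LinkedList:
          def __init__(self):
              self.head = self.tail = None
          def append(self, value):            # O(1)
              node = Node(value)
              if self.tail is None:
                  self.head = self.tail = node
              else:
                  self.tail.next = node
                  self.tail = node
          def concat(self, other):            # O(1): splice, no traversal
              if other.head is None:
                  return
              if self.tail is None:
                  self.head = other.head
              else:
                  self.tail.next = other.head
              self.tail = other.tail
              other.head = other.tail = None  # other is emptied by the splice

      a = LinkedList(); a.append(1); a.append(2)
      b = LinkedList(); b.append(3)
      a.concat(b)   # a now holds 1, 2, 3 without touching each node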

    Read the article

  • Start Developing a Multiplayer Online Client to host existing video game

    - by Rami.Shareef
    GameRanger, Garena, etc. I'm planning to start developing a small online client like those mentioned above (for use among friends), where the player that hosts the game is the server himself. I was looking through the web for something to start with, but couldn't find any resources for this. I'm planning to do it with .NET technology, and I have decent development experience. Any good resources to start with? The game I'm aiming to support first is Warcraft III: The Frozen Throne.

    Read the article

  • SQLAuthority News – Technical Review of Learning at Koenig Solutions

    - by pinaldave
    Yesterday I finished my 3-day, fast-track, in-person learning of the course End to End SQL Server Business Intelligence at Koenig Solutions. You can read my previous article over here regarding why I am learning SQL Server. Yesterday I blogged about my experience of arriving at the Training Center and my induction with the center.

    The Training Days
    I had enrolled for three days of training, so my routine on each of the three days was very much the same. However, the content was different every day, as I was learning something new each day. Let me describe a few of the interesting details of my daily routine.

    A Single Student Batch
    The best part of my training was that in my training batch, I am the single student. Koenig is known for smaller batches and often they have single-student batches as well. I was very much delighted to know that I would have dedicated access and attention from my trainer, as I would be the single student in my batch. In most of the labs I have observed there are no more than 4 students at any time.

    7:30 AM Breakfast Talk
    We all students gather at 7:30 in the breakfast area. The best time of the day. I was the only Indian student in the group. The other students were from USA, Canada, Nigeria, Bhutan, Tanzania, and a few other countries. I immediately became the source of information and the reference manual. Though the distance between Delhi and Bangalore is 2000+ KM, I was considered a local guy.

    8:30 AM Heading to Training Center
    Every day without fail at 8:30 the van started from our accommodation to the training center. As mentioned in an earlier blog post the distance is about 5 minutes, and we were able to reach the location before 8:45. This gave us some time to settle in before our class started at 9:00 AM.

    9:00 AM Order Lunch Food
    Well, it may sound funny that we had just had breakfast 30 minutes earlier, but the first thing everybody has to do is order lunch as soon as the class starts. There is an online training portal to order food for the day. Everybody has to place their order early in the day so the food arrives on time at lunch time. Everybody can order whatever they want using the online ordering system. The options are plenty and everybody can order what they like.

    9:05 AM Learning Starts
    After deciding the lunch we started the learning. I was very fortunate to have a very experienced trainer - Prakash Chheatry. Though I had never met him before, I have heard a lot about Prakash. He is known as the top-most SQL Server Trainer in India. His student list contains some of the very well-known SQL Server Experts of the world and a few "best seller" SQL Server book authors. Learning continues till 1:00 PM with one tea-coffee break in between.

    1:00 PM Lunch
    The lunch time is again the fun time. We all students get together in the afternoon and tell the stories of the world. Indeed the best part of the day besides learning new stuff.

    4:55 PM Ready to Return
    We stop at 4:55 as at precisely 5:00 PM the van stops by the institute to take us back to our accommodation. Trust me, it is always a seriously long day, but the amount of learning is the win of the day.

    7:30 PM Dinner Time
    After coming back to the accommodation I study till 7:30 and then rush for dinner. Dinner is world cuisine and the desserts are really delicious. After dinner every day I have written a blog and retired early, as the next day is always going to be busier than the present day.

    What did I learn
    As I mentioned earlier, I know SQL Server fairly well, and I had expressed the same in my conversation as well. This is the reason I was assigned a fairly senior trainer and we learned everything quite quickly. As I knew quite a few things, we went pretty fast through many topics. There were a few things I wanted to learn in detail as well as practice in the labs. We slowed down where we wanted and rushed through the concepts where I was very comfortable. Here is the list of the things we covered in the action-packed three days:

      - Introduction to Business Intelligence (Intro)
      - SQL Server Analysis Service (Theory and Lab)
      - SQL Server Integration Service (Theory and Lab)
      - SQL Server Reporting Service (Theory and Lab)
      - SQL Server PowerPivot (Lab)
      - UDM (Theory)
      - SharePoint Concepts (Theory)
      - Power View (Demo)
      - Business Intelligence and Security (Discussion)

    Well, I was delighted that I was able to refresh lots of concepts during these three days. Thanks to my trainer and my friend who helped me have a good learning experience. I believe all the learning will help me in my growth and future career. With this I end this experience. I am planning to have another online learning experience later this month. I will blog about my experience as I begin it. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Training, T SQL, Technology

    Read the article

  • Dartisans ep. 10: Dart Plugin for IntelliJ

    Ask and vote for questions at: goo.gl Edit and debug your Dart apps with IntelliJ and WebStorm! In this episode of Dartisans, we'll talk to the engineers working on this exciting project. Join hosts Seth Ladd and JJ Behrens to learn more about writing Dart apps with JetBrains' powerful editors.
    From: GoogleDevelopers | Views: 1279 | 35 ratings | Time: 35:25 | More in Science & Technology

    Read the article

  • Videos of my MonoTouch and Mono and Mobile sessions from NDC 2011

    - by Chris Hardy (ChrisNTR)
    Two weeks ago I was in Oslo, Norway, getting ready to present a few talks at the Norwegian Developers Conference 2011, and now, two weeks later, it's about time I point you to my MonoTouch and Mono and Mobile talks from the conference! First I would like to thank everyone involved with the conference: the hosts, the staff, the speakers and the attendees. There were so many great talks going on that you're forced to download the videos afterwards! All the videos from the conference are up on the...(read more)

    Read the article

  • IIS and PAE

    - by Latest Microsoft Blogs
    I recently got a question from one of my customers about PAE and IIS that I thought I'd share the answer to. Their environment looked something like this: 32-bit OS (Windows 2003), IIS 6 with multiple application pools, where each app pool hosts a number Read More......(read more)

    Read the article

  • EPM 11.1.2 - R&A DATABASE CONNECTIONS DISAPPEAR FROM THE "DATABASE CONNECTION MANAGER"

    - by Powder
    When accessing the database connection panel through Reporting and Analysis, all previously entered database connections do not appear. This is due to a bug in the Windows SMB2 protocol, and to work around it you have to disable the protocol. On Windows 2008 the protocol is automatically enabled. This needs to be done on both the servers and the clients. Note that "server" is the server which hosts the RAF repository service and the RM1 folder, and "client" is the server which hosts a replicated Repository service that accesses repository files via the network, i.e. \\<server_host>\RM1

    In order to disable SMB 2.0 on the server side, follow these steps:
      1. Run "regedit" on the Windows Server 2008 based computer.
      2. Expand and locate the subtree as follows: HKLM\System\CurrentControlSet\Services\LanmanServer\Parameters
      3. Add a new REG_DWORD key with the name of "Smb2" (without quotation marks):
           Value name: Smb2
           Value type: REG_DWORD
           0 = disabled
           1 = enabled
      4. Set the value to 0 to disable SMB 2.0, or set it to 1 to re-enable SMB 2.0.
      5. Reboot the server.

    To disable SMB 2.0 for Windows Vista or Windows Server 2008 systems that are the "client" systems, run the following commands:
      sc config lanmanworkstation depend= bowser/mrxsmb10/nsi
      sc config mrxsmb20 start= disabled
    Note there's an extra " " (space) after the "=" sign.

    To re-enable SMB 2.0 for Windows Vista or Windows Server 2008 systems that are the "client" systems, run the following commands:
      sc config lanmanworkstation depend= bowser/mrxsmb10/mrxsmb20/nsi
      sc config mrxsmb20 start= auto
    Again, note there's an extra " " (space) after the "=" sign.
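
    If you would rather script steps 1-5 than click through regedit, a sketch along these lines should set the same value using Python's standard winreg module; treat it as an assumption to test on a non-production box first (run from an elevated prompt on the server, then reboot).

      # Set the Smb2 value from the steps above programmatically.
      import winreg

      path = r"SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters"
      with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, path, 0,
                              winreg.KEY_SET_VALUE) as key:
          winreg.SetValueEx(key, "Smb2", 0, winreg.REG_DWORD, 0)  # 0 = disabled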

    Read the article

  • Sea Monkey Sales & Marketing, and what does that have to do with ERP?

    - by user709270
    Tier One Defined
    By Lyle Ekdahl, Oracle JD Edwards Group Vice President and General Manager
    I recently became aware of the latest Sea Monkey Sales & Marketing tactic. Wait now, what is Sea Monkey Sales & Marketing and what does that have to do with ERP? Well, if you grew up in the USA during the 50's, 60's and maybe a bit in the early 70's, there was a unifying medium of culture known as the comic book. I was a big Iron Man fan. I always liked the troubled-hero aspect of Tony Stark, and hey, he was a technologist. This is going somewhere, just hold on. Of course comic books, like most media, contained advertisements. Ninety-pound weakling transformed by Charles Atlas in just 15 minutes per day. Baby Ruth, Juicy Fruit Gum and all assortments of Hostess goodies were on display. The best ad was for the "Amazing Live Sea-Monkeys – The real live fun-pets you grow yourself!" These ads set the standard for exaggeration and half-truth; "…they love attention…so eager to please, they can even be trained…" The cartoon picture on the ad is of a family of royal-looking sea creatures – daddy, mommy, son and little sis – sea monkey? There was a disclaimer at the bottom in fine print, "Caricatures shown not intended to depict Artemia." OK, what ten-year-old knows what the heck artemia is? Well, you grow up fast once you've been separated from your buck twenty-five plus postage, just to discover that it is brine shrimp. Really dumb brine shrimp that don't take commands or do tricks. Unfortunately the technology industry is full of sea monkey sales and marketing. Yes, believe it or not, in some cases there is subterfuge and obfuscation used to secure contracts. Hey, I get it; the picture on the box might not be the actual size. Make up what you want about your product, but here is what I don't like: could you leave out the obvious falsity when it comes to my product, especially the negative stuff. So here is the latest one – "Oracle's JD Edwards is NOT tier one". Really? Definition please! Well, a whole host of googleable and reputable sources confirm that a tier one vendor is large, well known, and enjoys national and international recognition. Let me see: large, so thousands of customers? Oh, and part of the world's largest business software and hardware corporation? Check and check; JD Edwards has that and that. Well known, enjoying national and international recognition? Oracle's JD Edwards EnterpriseOne is available in 21 languages and is directly localized in 33 countries that support some of the world's largest multinationals and many midsized domestic market companies. Something on the order of half the JD Edwards customer base is outside North America. My passport is on its third insert after 2 years, and not from vacations. So if you don't mind, I am going to mark national and international recognition in the got-it column. So what else is there? Well, let me offer a few criteria:
      - Longevity – The JD Edwards products benefit from 35+ years of intellectual property development; through booms, busts, mergers and acquisitions, we are still here.
      - Vision & innovation – JD Edwards is the first full-suite ERP to run on the iPad, as just one example.
      - Proven track record of execution – Since becoming part of Oracle, JD Edwards has released to the market over 20 deliverables including major releases, point releases, new apps modules, tool releases, integrations….
      - Solid, focused functionality with a flexible, interoperable, extensible underlying architecture – JD Edwards offers solid core ERP with specialty modules for verticals, all delivered on a well-defined independent tools layer that helps enable you to scale your business without an ERP reimplementation.
      - A continuation plan – Oracle's JD Edwards offers our customers a 6-year roadmap as well as interoperability with Oracle's next generation of applications.
    Oh, I almost forgot that the expert sources agree on one additional thing: tier one may be a preferred vendor that offers product and services to you with appealing value. You should check out the TCO studies of JD Edwards. I think you will see what the thousands of customers that rely on these products to run their businesses enjoy – that is, the tier one solution with the lowest TCO. Oh, and if you get an offer to buy an ERP for no license charge, remember the picture on the box might not be the actual size.

    Read the article

  • Apache on Windows - splitting vHost logs

    - by Cylindric
    I have a Windows Server 2008 machine running Apache, and it will be hosting several virtual hosts. I'd rather not use the logrotate tool (|bin/logrotate), as it seems to create significant extra overhead with all the extra processes. Is there a simple Windows alternative to get the log entries from a combined log file split into several per-site files? Preferably with custom output directories, but that is optional.
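
    A minimal offline splitter sketch, assuming the combined file was written with the vhost name as the first field of each line (e.g. a custom LogFormat starting with %v); the paths and that leading-%v convention are assumptions, not your current config:

      # Split a combined log whose lines start with the vhost name,
      # e.g. LogFormat "%v %h %l %u %t \"%r\" %>s %b" vcombined
      import os

      out = {}  # vhost name -> open file handle
      with open("combined.log") as src:
          for line in src:
              vhost, _, rest = line.partition(" ")
              if not rest:
                  continue
              if vhost not in out:
                  os.makedirs(os.path.join("logs", vhost), exist_ok=True)
                  out[vhost] = open(os.path.join("logs", vhost, "access.log"), "a")
              out[vhost].write(rest)  # write the line minus the vhost prefix
      for f in out.values():
          f.close()

    Apache also ships a split-logfile support script built around the same %v-first convention, if running Perl on the box is acceptable.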

    Read the article
