Search Results

Search found 2058 results on 83 pages for 'chain of responsibility'.

Page 60/83 | < Previous Page | 56 57 58 59 60 61 62 63 64 65 66 67  | Next Page >

  • Can I edit the snapshots of websites on most visited sites page in Chrome?

    - by arik
    I saw the previous chain of discussion but did not find a way to continue the thread. I have found the 'Preferences' file (I have Windows 7), but in it I could not see clearly what to modify. I did find a section called 'URLs pinned' or something like that, but it did not fully match the ones I have. I have activated profile sync for Chrome - I don't know if that has any effect. Can you manually edit the icons of the most visited sites on the new tab page in Chrome? When I open a new tab in Chrome I get the 'new page' tab, where I selected 'most popular', so I have 8 icons with the 8 most popular sites I have visited; I can also pin any one of those I see on screen, so that they remain 'permanent' there. We are missing one that would point to, say, Hotmail, so I was looking for a way to add Hotmail as one of the 8. (And, by the way, we ticked the 'X' on one of those 8, and now it remains grey and shows nothing.) So, my double question: how can I add a URL of my choice into one of those 8 spaces, and how can I restore use of the one that went blank?

    Read the article

  • In C# should I reuse a function / property parameter to compute a temp result or create a temporary variable?

    - by Hamish Grubijan
    The example below may not be problematic as is, but it should be enough to illustrate a point. Imagine that there is a lot more work than trimming going on. public string Thingy { set { // I guess we can throw a null reference exception here on null. value = value.Trim(); // Well, imagine that there is so much processing to do this.thingy = value; // That this.thingy = value.Trim() would not fit on one line ... So, if the assignment has to take two lines, then I either have to abuse/reuse the parameter, or create a temporary variable. I am not a big fan of temporary variables. On the other hand, I am not a fan of convoluted code. I did not include an example where a function is involved, but I am sure you can imagine it. One concern I have is if a function accepted a string and the parameter was "abused", and then someone changed the signature to ref in both places - this ought to mess things up, but ... who would knowingly make such a change if it already worked without a ref? Seems like it is their responsibility in this case. If I mess with the value of value, am I doing something non-trivial under the hood? If you think that both approaches are acceptable, then which do you prefer and why? Thanks.
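
    Below is a minimal, hedged sketch of the two options side by side (class, field and helper names are assumed, not from the question); note that reassigning value only changes the local copy, since a string parameter is passed by value unless the signature says ref:

      using System;

      class Example
      {
          private string thingy;

          public string Thingy
          {
              set
              {
                  if (value == null) throw new ArgumentNullException("value");

                  // Option 1: reuse the parameter; harmless for a by-value parameter
                  value = value.Trim();
                  // ...imagine much more processing here...
                  this.thingy = value;
              }
          }

          public string ThingyWithTemp
          {
              set
              {
                  // Option 2: a temporary keeps the original input readable
                  string cleaned = (value ?? "").Trim();
                  // ...imagine much more processing here...
                  this.thingy = cleaned;
              }
          }
      }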

    Read the article

  • Resource reference passing in puppet

    - by paweloque
    Is it possible to pass puppet resource references to other resources? My use-case is to build a jenkins build pipeline with puppet. To chain jenkins jobs into a pipeline I need to pass the successor job to a job. A subset of the definition is: jobs::build { "Build ${release_name}": release => $release_name, jenkins_jobs_path => $jenkins_jobs_path, successors => 'Deploy', } jobs::deploy { "Deploy ${release_name}": release => $release_name, jenkins_jobs_path => $jenkins_jobs_path, successors => 'Smoke Test', } In the definition you see that I define the successors by name, i.e. 'Deploy' and, in the case of the second job, 'Smoke Test'. What I'd like to do is to pass a reference to a resource and extract the name from it: jobs::build { "Build ${release_name}": release => $release_name, jenkins_jobs_path => $jenkins_jobs_path, successors => Jobs::Deploy["Deploy ${release_name}"], } jobs::deploy { "Deploy ${release_name}": release => $release_name, jenkins_jobs_path => $jenkins_jobs_path, successors => Jobs::Smoke_test["Smoke Test ${release_name}"], } And then within the jobs::deploy and jobs::build definition I'd access the resource by reference and query for its type, etc. Is it possible to achieve this in puppet?
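
    A hedged sketch of the usual workaround (defined-type and parameter names assumed): pass the successor's title as a plain string and build the resource reference inside the define, which gives the ordering without having to pull a name back out of a reference:

      define jobs::stage (
        $release,
        $jenkins_jobs_path,
        $successor = undef   # plain title string, e.g. "Deploy ${release_name}"
      ) {
        # ...manage the Jenkins job config here...

        if $successor {
          # turn the string back into a resource reference for ordering
          Jobs::Stage[$title] -> Jobs::Stage[$successor]
        }
      }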

    Read the article

  • fail2ban log parsing too slow on Raspberry Pi - options? [migrated]

    - by Gordon Morehouse
    I'm running fail2ban on a Raspberry Pi at 950MHz which I cannot overclock further. The Pi is occasionally subject to SYN floods on particular ports. I've set up iptables to throttle the rate of SYNs on the port of interest; when the throttle limits are exceeded, hosts which send SYNs are dropped into the REJECT chain and the particular SYN packet which exceeded the limit is logged. fail2ban then watches for these logged SYNs and, after seeing a few, temporarily bans the host for a short time (this is a transient issue in the app I'm working with). The problem is that the SYN floods can occasionally reach rates which are too fast for fail2ban to keep up with; I'll see 20-40 log messages per second, and eventually fail2ban falls behind and becomes ineffective. To add insult to injury, it continues consuming a LOT of CPU as it tries to catch up. I have verified that DROP chained packets from hosts already banned by fail2ban are not logged, and thus do not add to its load. What are my options here? I have a few ideas, but no clear path forward. Could I make the log-parse regex "easier" so it takes fewer cycles? Would using iptables --log-prefix to put a token near the start of the log message, and/or otherwise simplifying/altering the fail2ban regex help? Here is the current fail2ban config line containing a regex: failregex = kernel:.*?SRC=(?:::f{4,6}:)?(?P<host>[\w\-.^_]+) DST.*?SYN Is there a faster way for fail2ban to watch for the packets exceeding the limits than parsing kern.log? Could fail2ban be run under PyPy instead of CPython with minimal nonstandard wizardry (the OS is Raspbian 7, so, mostly Debian 7)? Is there something better than fail2ban that I could use to watch for the packets which exceed the SYN limits, and after N exceeds in X seconds, temporarily put the offending IP into the iptables DROP bucket, and take it out when the ban timer expires? Again, I'd vastly prefer a solution that uses as much software available in Debian as possible, though I can build Debian packages in a pinch.
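
    A hedged sketch of the --log-prefix idea from the question (port, rate and prefix are placeholders): a fixed token at the front of the message gives fail2ban something cheap to anchor on before it ever reaches the SRC= capture:

      # iptables side: tag the over-limit SYNs with a fixed prefix
      iptables -A INPUT -i eth0 -p tcp --dport 12345 --syn \
          -m limit --limit 6/minute -j LOG --log-prefix "SYNLIMIT: "

      # fail2ban side: the filter then only has to match lines carrying the token
      failregex = SYNLIMIT: .*SRC=(?:::f{4,6}:)?(?P<host>[\w\-.^_]+) DST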

    Read the article

  • changing user permissions dynamically

    - by ephieste
    I am designing a system on SharePoint. There is an approval list for the items. The members can approve, reject and edit the items. Someone from the approval list has to fill in the "assigned to" field of an item while approving it. The user who is added to the "assigned to" field should be able to edit the content of the item after it is approved. So, how can I grant edit permission to users once they are added to the "assigned to" field of a specific item? The situation is: approval list: A, B, C (edit, view permission); users: x, y, z ... (no permission, view after approval); items: item1, item2, item3 ... (items are invisible). A approved item1 and added X to the "assigned to" field, which means the item is now under X's responsibility. But X hasn't got edit permission, and we can't give X edit permission on every item. He should be able to edit an item only after he is written into its "assigned to" field. How can I create this workflow in SharePoint? Urgent help needed, please.

    Read the article

  • Is "DSLAM congestion" a legitimate reason for slow DSL?

    - by Jay Bazuzi
    My DSL has been extremely slow in the evenings recently. To test it, I telnet to my DSL Modem, and ping the gateway. This way I eliminate internet congestion and local network issues. In the mornings I get 30ms - 50ms pings. In the evenings, it bounces around a lot, but 10000ms pings are common. I complained to Qwest support, and they said it was a known issue on their end, their engineers were working on it, and wouldn't say anything else. A couple days later I complained again, and they sent out a technician. He tested my house wiring and found that one of the lines had a short. It was an unused line, so we disconnected it, and he said things looked better and left. My daytime speeds improved at this point, but evening is still bad. I complained to Qwest support again, and they said it was a problem with DSLAM congestion at their end, and that they were working on it, but no ETA. My neighbor has Qwest DSL and doesn't seem to have these problems. That seems strange. I go use her network when I absolutely must get online and mine is behaving badly. I can't tell if they're yanking my chain or not. Regardless, these speeds are crap. I'm paying for 7Mbps but am lucky if I get 1/10th that in the evenings. My kids like to watch Netflix streaming movies, and it's just impossible after 5pm or so. Should I wait it out? Will complaining again produce any results? Should I change my subscription to a lower speed until they fix their end? Or switch to cable?

    Read the article

  • Getting Grub2 to recognize a Raid 10 boot/root

    - by xenoterracide
    I've been trying to get my raid to boot from grub2 for about 2 days now and I don't seem to be getting closer. The problem appears to be that it doesn't recognize my raid at all. It doesn't see (md0) etc. I'm not sure why, or how to change this. I'm using mdadm, 2-device (essentially a raid1) raid10,f2, which is currently degraded. I have tried adding the raid and mdraid modules with grub-install, along with others. I've tried several variations on grub-install such as grub-install --debug --no-floppy --modules="biosdisk part_msdos chain raid mdraid ext2 linux search ata normal" /dev/md0 I've been searching the net for an answer as to what I haven't done, but no luck. On my other drive, which I plan on removing, the raid is initialized and mounted fine on boot, but it's not the boot/root for that setup. My grub.cfg isn't recognized by grub since it can't read the raid partition, so I'm not posting that. md0 is not listed in my /boot/grub/device.map.
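
    One hedged note, since the md support moved around between GRUB 2 releases: the modules are metadata-specific (raid/mdraid in older builds, mdraid09 for 0.90 superblocks and mdraid1x for 1.x in newer ones), and the array usually shows up as (md0) or (md/0) rather than as a partition. A quick check from the grub command prompt would look roughly like:

      insmod mdraid1x      # or mdraid09 / mdraid, depending on GRUB version and metadata
      insmod ext2
      ls                   # the array should now appear as (md0) or (md/0)
      set root=(md/0)
      ls /boot/grub        # grub.cfg should be readable from the raid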

    Read the article

  • How do I manage application configuration in ASP.NET?

    - by GlennS
    I am having difficulty with managing configuration of an ASP.Net application to deploy for different clients. The sheer volume of different settings which need twiddling takes up large amounts of time, and the current configuration methods are too complicated to enable us to push this responsibility out to support partners. Any suggestions for better methods to handle this or good sources of information to research? How we do things at present: Various xml configuration files which are referenced in Web.Config, for example an AppSettings.xml. Configurations for specific sites are kept in duplicate configuration files. Text files containing lists of data specific to the site In some cases, manual one-off changes to the database C# configuration for Windsor IOC. The specific issues we are having: Different sites with different features enabled, different external services we have to talk to and different business rules. Different deployment types (live, test, training) Configuration keys change across versions (get added, remove), meaning we have to update all the duplicate files We still need to be able to alter keys while the application is running Our current thoughts on how we might approach this are: Move the configuration into dynamically compiled code (possibly Boo, Binsor or JavaScript) Have some form of diffing/merging configuration: combine a default config with a live/test/training config and a site-specific config
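
    One concrete option for the duplicate-file problem, sketched with assumed key names and values: keep a single base Web.config and layer small per-site/per-environment XDT transform files over it at build or deploy time, so a new key only has to be added once:

      <!-- Web.ClientA.Live.config: only the differences from the base Web.config -->
      <configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
        <appSettings>
          <add key="ExternalServiceUrl" value="https://live.example.com/api"
               xdt:Transform="SetAttributes" xdt:Locator="Match(key)" />
          <add key="FeatureXEnabled" value="true"
               xdt:Transform="SetAttributes" xdt:Locator="Match(key)" />
        </appSettings>
      </configuration>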

    Read the article

  • Calendar sharing between unrelated Domains

    - by vlannoob
    I have a 'request' from one of the 'big guys' that has me scratching my head a bit. He is one of our Executives that also is a board member of several other organistations, so he floats around between 4 different unrelated companies, each with their own domains, Exchange setup etc. He has domain accounts in each organisation. He has an iPad with multiple Exchange accounts so he can see all his calendars which works Ok for him - Apple calendaring bugs/flaws aside. What he wants is the ability for 'reception staff' at each organisation to 'see' all his calendars as they are booking things for him in their respective organisations calendars without it conflicting with bookings made in his calendars by other organisations......you with me?? So for example: Company A books a meeting into his Comapny A calendar at 9am Monday and Company B books him a meeting in his Company B Calendar at 9:15am Monday on the other side of town and of course Company C has him booked in all day Monday on their Company C calendar. He gets all those on his iPad but he would like either a 'global' calendar all can see and book into or the ability for receptionists at Company's A,B and C to see all the Calendars to avoid these kind of conflicts. I told him to 'go away' straight off the bat, I don't control anything to do with the other companies or know their infrastructure. And quite frankly I don't want any part of it...but he's whining and he's high enough up the food chain that I can't ignore him forever. I'm open to suggestions. Is there any third party software/services that can facilitiate this kind of setup? I really don't want to be creating users in my AD structure to people not ion our organisation so they can get access to his calendar and I am sure there sysadmins feel the same. As usual - any advice is greatly appreciated ;)

    Read the article

  • How to verify a self-signed certificate from another server using openssl?

    - by ntsue
    I am new to openssl and I am having some trouble verifying (from a client machine) an ftp server using ssl with a self-signed certificate. I generated the .cer file by going to my server in IIS and exporting the certificate without the private key. I believe that this is all that I should need on the client side, right? I use the following command to verify the certificate: openssl verify ftp.cer and the error that I get back is: error 20 at 0 depth lookup:unable to get local issuer certificate I tried this as well: openssl verify -CAfile ftp.cer ftp.cer but received the same error. From what I understand about SSL, this is happening because I have no chain of trust that connects to this server. By default, openssl did not install any trusted CAs and this is fine. I would just like to tell it to trust this server. I tried various tutorials telling me how to add a certificate authority, including this one here; however, the instructions are for Linux and include adding a symlink, and I am trying to do this in Windows. If anyone could provide any guidance on how to do this, or enlighten me if I am not understanding something correctly, I would greatly appreciate it. Thanks!
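
    Two checks worth running first (standard openssl commands, file name taken from the question): confirm the IIS export is PEM rather than DER, and confirm the certificate really is self-signed, i.e. subject and issuer are identical; if they differ, the file is not its own CA and -CAfile needs the actual issuing certificate instead:

      # if plain parsing fails, the IIS export is probably DER; convert it first
      openssl x509 -inform DER -in ftp.cer -out ftp.pem

      # subject and issuer should match for a genuinely self-signed certificate
      openssl x509 -in ftp.pem -noout -subject -issuer

      # then verifying it against itself should succeed
      openssl verify -CAfile ftp.pem ftp.pem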

    Read the article

  • Samba server NETBIOS name not resolving, WINS support not working

    - by Eric
    When I try to connect to my CentOS 6.2 x86_64 server's samba shares using address \\REPO (NETBIOS name of REPO), it times out and shows an error; if I do so directly via IP, it works fine. Furthermore, my server does not work correctly as a WINS server despite my samba settings being correct for it (see below for details). If I stop the iptables service, things work properly. I'm using this page as a reference for which ports to use: http://www.samba.org/samba/docs/server_security.html Specifically: UDP/137 - used by nmbd UDP/138 - used by nmbd TCP/139 - used by smbd TCP/445 - used by smbd I really really really want to keep the secure iptables design I have below but just fix this particular problem. SMB.CONF [global] netbios name = REPO workgroup = AWESOME security = user encrypt passwords = yes # Use the native linux password database #passdb backend = tdbsam # Be a WINS server wins support = yes # Make this server a master browser local master = yes preferred master = yes os level = 65 # Disable print support load printers = no printing = bsd printcap name = /dev/null disable spoolss = yes # Restrict who can access the shares hosts allow = 127.0.0. 10.1.1. [public] path = /mnt/repo/public create mode = 0640 directory mode = 0750 writable = yes valid users = mangs repoman IPTABLES CONFIGURE SCRIPT # Remove all existing rules iptables -F # Set default chain policies iptables -P INPUT DROP iptables -P FORWARD DROP iptables -P OUTPUT DROP # Allow incoming SSH iptables -A INPUT -i eth0 -p tcp --dport 22222 -m state --state NEW,ESTABLISHED -j ACCEPT iptables -A OUTPUT -o eth0 -p tcp --sport 22222 -m state --state ESTABLISHED -j ACCEPT # Allow incoming HTTP #iptables -A INPUT -i eth0 -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT #iptables -A OUTPUT -o eth0 -p tcp --sport 80 -m state --state ESTABLISHED -j ACCEPT # Allow incoming Samba iptables -A INPUT -i eth0 -p udp --dport 137 -m state --state NEW,ESTABLISHED -j ACCEPT iptables -A OUTPUT -o eth0 -p udp --sport 137 -m state --state ESTABLISHED -j ACCEPT iptables -A INPUT -i eth0 -p udp --dport 138 -m state --state NEW,ESTABLISHED -j ACCEPT iptables -A OUTPUT -o eth0 -p udp --sport 138 -m state --state ESTABLISHED -j ACCEPT iptables -A INPUT -i eth0 -p tcp --dport 139 -m state --state NEW,ESTABLISHED -j ACCEPT iptables -A OUTPUT -o eth0 -p tcp --sport 139 -m state --state ESTABLISHED -j ACCEPT iptables -A INPUT -i eth0 -p tcp --dport 445 -m state --state NEW,ESTABLISHED -j ACCEPT iptables -A OUTPUT -o eth0 -p tcp --sport 445 -m state --state ESTABLISHED -j ACCEPT # Make these rules permanent service iptables save service iptables restart**strong text**
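
    One hedged guess to test: NetBIOS name queries arrive as UDP broadcasts, and nmbd's unicast replies do not always match an ESTABLISHED conntrack entry, so the strict --state ESTABLISHED rules on outbound udp/137-138 can silently drop them. Two ways to check that theory:

      # allow nmbd's name-service replies out even when conntrack didn't pair them up
      iptables -A OUTPUT -o eth0 -p udp --sport 137:138 -j ACCEPT

      # or let the kernel track NetBIOS name-service broadcasts properly
      modprobe nf_conntrack_netbios_ns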

    Read the article

  • Build System with Recursive Dependency Aggregation

    - by radman
    Hi, I recently began setting up my own library and projects using a cross-platform build system (generates make files, Visual Studio solutions/projects etc on demand) and I have run into a problem that has likely been solved already. The issue that I have run into is this: When an application has a dependency that also has dependencies, then the application being linked must link the dependency and also all of its sub-dependencies. This proceeds in a recursive fashion e.g. (For argument's sake, let's assume that we are dealing exclusively with static libraries.) TopLevelApp.exe dependency_A dependency_A-1 dependency_A-2 dependency_B dependency_B-1 dependency_B-2 So in this example TopLevelApp will need to link dependency_A, dependency_A-1, dependency_A-2 etc and the same for B. I think the responsibility of remembering all of these manually in the target application is pretty suboptimal. There is also the issue of ensuring the same version of the dependency is used across all targets (assuming that some targets depend on the same things, e.g. boost). Now linking all of the libraries is required and there is no way of getting around it. What I am looking for is a build system that manages this for you. So all you have to do is specify that you depend on something and the appropriate dependencies of that library will be pulled in automatically. The build system I have been looking at is premake (premake4), which doesn't handle this (as far as I can determine). Does anyone know of a build system that does handle this? And if there isn't one, why not?
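
    For what it's worth, the behaviour described here is what CMake calls usage requirements (a different tool from premake, so this is only a sketch of the idea): a PUBLIC link dependency propagates to anything that later links the library, so the application never lists the sub-dependencies itself:

      add_library(dependency_A_1 STATIC a1.c)
      add_library(dependency_A   STATIC a.c)
      # PUBLIC: anything that links dependency_A also gets dependency_A_1
      target_link_libraries(dependency_A PUBLIC dependency_A_1)

      add_executable(TopLevelApp main.c)
      target_link_libraries(TopLevelApp PRIVATE dependency_A)   # A_1 comes along automatically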

    Read the article

  • Business Layer Pattern on Rails? MVCL

    - by Fabiano PS
    That is a broad question, and I would appreciate no short/dumb answers like: "Oh, that is the model's job, this question is silly (period)." PROBLEM: Where I work, people built a system over 2 years for managing the manufacturing process on demand, kept as simple yet broad as possible, involving selling, buying and assembly. The system is written in Ruby on Rails. It has been changed many times and the result is a mess of callbacks (some are called several times), 200+ models, and fat controllers: totally bad. The QUESTION is: is there a gem or pattern designed to handle large-app logic in Rails? The logic would be able to talk fully to the models (whose only concern would be data format handling and validation). What I EXPECT is to move complexity out of the various controllers and hard-to-track callbacks into files whose responsibility is to handle one business operation's logic. In some cases there is a need to wait for a response; in others, validating the input is enough and a background process takes over. For example -- sell some products (need to wait for the operation to finish): 1. Set up a view able to take the products input. 2. The controller gets the product list entered by the employee and calls the logic: Logic::ExecuteWithResponse('sell', 'products', :prods => @product_list_with_qtt, :when => @date, :employee => current_user() ) This Logic would handle buying orders, assembly orders, machine scheduling, warehouse reservations, and so on.
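
    What the question calls a Logic layer is commonly done in Rails as plain-Ruby service objects; a minimal sketch (all names and the Result shape assumed) of the kind of class the controller would call instead of piling the work into callbacks:

      # app/services/sell_products.rb (assumed path)
      class SellProducts
        Result = Struct.new(:ok, :errors)

        def initialize(products:, date:, employee:)
          @products = products
          @date     = date
          @employee = employee
        end

        def call
          return Result.new(false, ["no products given"]) if @products.empty?
          # ...create buying orders, assembly orders, machine schedule and
          # warehouse reservations; the models only validate and persist...
          Result.new(true, [])
        end
      end

      # in the controller:
      #   result = SellProducts.new(products: @product_list_with_qtt,
      #                             date: @date, employee: current_user).call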

    Read the article

  • Graph limitations - Should I use Decorator?

    - by Nick Wiggill
    I have a functional AdjacencyListGraph class that adheres to a defined interface GraphStructure. In order to layer limitations on this (eg. acyclic, non-null, unique vertex data etc.), I can see two possible routes, each making use of the GraphStructure interface: Create a single class ("ControlledGraph") that has a set of bitflags specifying various possible limitations. Handle all limitations in this class. Update the class if new limitation requirements become apparent. Use the decorator pattern (DI, essentially) to create a separate class implementation for each individual limitation that a client class may wish to use. The benefit here is that we are adhering to the Single Responsibility Principle. I would lean toward the latter, but by Jove!, I hate the decorator Pattern. It is the epitome of clutter, IMO. Truthfully it all depends on how many decorators might be applied in the worst case -- in mine so far, the count is seven (the number of discrete limitations I've recognised at this stage). The other problem with decorator is that I'm going to have to do interface method wrapping in every... single... decorator class. Bah. Which would you go for, if either? Or, if you can suggest some more elegant solution, that would be welcome. EDIT: It occurs to me that using the proposed ControlledGraph class with the strategy pattern may help here... some sort of template method / functors setup, with individual bits applying separate controls in the various graph-canonical interface methods. Or am I losing the plot?
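
    For a sense of scale, a hedged sketch in Java (interface trimmed to two methods, names assumed) of one limitation as a decorator: the forwarding boilerplate is real, but each decorator stays small and the seven limitations compose freely:

      interface GraphStructure<V> {
          void addVertex(V data);
          void addEdge(V from, V to);
      }

      final class NonNullVertexGraph<V> implements GraphStructure<V> {
          private final GraphStructure<V> inner;

          NonNullVertexGraph(GraphStructure<V> inner) { this.inner = inner; }

          @Override public void addVertex(V data) {
              if (data == null) throw new IllegalArgumentException("null vertex data");
              inner.addVertex(data);               // everything else is plain forwarding
          }

          @Override public void addEdge(V from, V to) {
              if (from == null || to == null) throw new IllegalArgumentException("null endpoint");
              inner.addEdge(from, to);
          }
      }

      // usage (the other decorators are assumed):
      // GraphStructure<String> g = new NonNullVertexGraph<>(new AcyclicGraph<>(new AdjacencyListGraph<>()));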

    Read the article

  • How to organize the work when a project needs to be re-implemented due to poor code quality?

    - by Dmitriy Nagirnyak
    Hi, I have joined a very small team where one main developer has been building the web app (.NET 4.0) for about 6 months. The project should be delivered within the next 2 months. After a first look at the code I can say that I would never allow it to go to production (things like catch { }, no tests at all with WebForms, etc.). So the code quality is incredibly low. My task is to improve that and still deliver the solution. So I plan to start with unit testing and reimplementing most of the functionality in MVC2 (though reusing some of the existing code). I estimate that I will need about 6 weeks to catch up with the current progress and be on the same functionality level as the application will be in 6 months. The problem is that the main developer who has been working on the project does not seem to be very 'professional' or skillful (he seems to be just starting out in IT and many basic things are unknown to him). It will take a significant amount of time and effort to teach him how to do proper testing and development and to apply some patterns. I am ready to take responsibility for reimplementing the application, but at the same time I don't want the main developer to be idle; since he won't be able to contribute significantly to the better-world project at this stage, I am not sure what would be the best way to keep productivity high for both of us. Currently I think the following solution is good enough: he proceeds doing what he does until I catch up with him, and then we start working on a new project together. The problem is that, of course, this approach is not very productive, as one developer will work on the better-world project while the other proceeds with what he did, effectively doing similar tasks. Can you suggest how we could better organise the work together in order to be most efficient for the overall project? Thanks, Dmitriy.

    Read the article

  • Add fields to Django ModelForm that aren't in the model

    - by Cyclic
    I have a model that looks like: class MySchedule(models.Model): start_datetime=models.DateTimeField() name=models.CharField('Name',max_length=75) With it comes its ModelForm: class MyScheduleForm(forms.ModelForm): startdate=forms.DateField() starthour=forms.ChoiceField(choices=((6,"6am"),(7,"7am"),(8,"8am"),(9,"9am"),(10,"10am"),(11,"11am"),(12,"noon"),(13,"1pm"),(14,"2pm"),(15,"3pm"),(16,"4pm"),(17,"5pm"),(18,"6pm"),(19,"7pm"),(20,"8pm"),(21,"9pm"),(22,"10pm"),(23,"11pm"))) startminute=forms.ChoiceField(choices=((0,":00"),(15,":15"),(30,":30"),(45,":45"))) class Meta: model=MySchedule def clean(self): starttime=time(int(self.cleaned_data.get('starthour')),int(self.cleaned_data.get('startminute'))) try: self.instance.start_datetime=datetime.combine(self.cleaned_data.get("startdate"),starttime) except TypeError: raise forms.ValidationError("There's a problem with your start or end date") return self.cleaned_data Basically, I'm trying to break the DateTime field in the model into 3 more easily usable form fields -- a date picker, an hour dropdown, and a minute dropdown. Then, once I've gotten the three inputs, I reassemble them into a DateTime and save it to the model. A few questions: 1) Is this totally the wrong way to go about doing it? I don't want to create fields in the model for hours, minutes, etc, since that's all basically just intermediary data, so I'd like a way to break the DateTime field into sub-fields. 2) The difficulty I'm running into is when the startdate field is blank -- it seems like it never gets checked for non-blankness, and just ends up throwing a TypeError later when the program expects a date and gets None. Where does Django check for blank inputs and raise the error that eventually goes back to the form? Is this my responsibility? If so, how do I do it, since clean_startdate() isn't evaluated because startdate isn't in the model? 3) Is there some better way to do this with inheritance? Perhaps inherit MyScheduleForm in a BetterScheduleForm and add the fields there? How would I do this? (I've been playing around with it for over an hour and can't seem to get it.) Thanks! [Edit:] I had left off the return self.cleaned_data -- lost it in the copy/paste originally; it belongs at the end of clean().
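
    On question 2, a hedged sketch of the usual pattern (names taken from the form above, datetime/time imports assumed): each field is validated individually before clean() runs, so a blank startdate is already reported as a field error and is simply missing from cleaned_data; clean() should guard for that instead of assuming the key is present:

      # assumes: from datetime import datetime, time
      def clean(self):
          cleaned = self.cleaned_data
          if 'startdate' in cleaned and 'starthour' in cleaned and 'startminute' in cleaned:
              starttime = time(int(cleaned['starthour']), int(cleaned['startminute']))
              self.instance.start_datetime = datetime.combine(cleaned['startdate'], starttime)
          # a blank startdate already has a "required" error attached by the field itself
          return cleaned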

    Read the article

  • Make isolinux 4.0.3 chainload itself in VMWare

    - by chainloader
    I have a bootable iso which boots into isolinux 4.0.3 and I want to make it chainload itself (my actual goal is to chainload isolinux.bin v4.0.1-debian, which should start up the Ubuntu10.10 Live CD, but for now I just want to make it chainload itself). I can't get isolinux to chainload any isolinux.bin, no matter what version. It either freezes or shows a "checksum error" message. I'm using VMWare to test the iso. Things I have tried: .com32 /boot/isolinux/chain.c32 /boot/isolinux/isolinux-debug.bin (chainload self) this shows Loading the boot file... Booting... ISOLINUX 4.03 2010-10-22 Copyright (C) 1994-2010 H. Peter Anvin et al isolinux: Starting up, DL = 9F isolinux: Loaded spec packet OK, drive = 9F isolinux: Main image LBA = 53F00100 ...and the machine freezes. Then I've tried this (chainload GRUB4DOS 0.4.5b) chainloader /boot/isolinux/isolinux-debug.bin Result: Error 13: Invalid or unsupported executable format Next try: (chainload GRUB4DOS 0.4.5b) chainloader --force /boot/isolinux/isolinux-debug.bin boot Result: ISOLINUX 4.03 2010-10-22 Copyright (C) 1994-2010 H. Peter Anvin et al isolinux: Starting up, DL = 9F isolinux: Loaded spec packet OK, drive = 9F isolinux: No boot info table, assuming single session disk... isolinux: Spec packet missing LBA information, trying to wing it... isolinux: Main image LBA = 00000686 isolinux: Image checksum error, sorry... Boot failed: press a key to retry... I have tried other things, but all of them failed miserably. Any suggestions?

    Read the article

  • Observer pattern and violation of the Single Responsibility Principle

    - by Devil Jin
    I have an applet which repaints itself once the text has changed. Design 1: //MyApplet.java public class MyApplet extends Applet implements Listener{ private DynamicText text = null; public void init(){ text = new DynamicText("Welcome"); } public void paint(Graphics g){ g.drawString(text.getText(), 50, 30); } //implement Listener update() method public void update(){ repaint(); } } //DynamicText.java public class DynamicText implements Publisher{ // implements Publisher interface methods //notify listeners whenever text changes } Isn't this a violation of the Single Responsibility Principle, where my Applet not only acts as an Applet but also has to do the Listener's job? In the same way, the DynamicText class not only generates the dynamic text but also updates the registered listeners. Design 2: //MyApplet.java public class MyApplet extends Applet{ private AppletListener appLstnr = null; public void init(){ appLstnr = new AppletListener(this); // applet stuff } } // AppletListener.java public class AppletListener implements Listener{ private Applet applet = null; public AppletListener(Applet applet){ this.applet = applet; } public void update(){ this.applet.repaint(); } } // DynamicText public class DynamicText{ private TextPublisher textPblshr = null; public DynamicText(TextPublisher txtPblshr){ this.textPblshr = txtPblshr; } // call textPblshr.notifyListeners whenever text changes } public class TextPublisher implements Publisher{ // implements publisher interface methods } Q1. Is design 1 an SRP violation? Q2. Is composition a better choice here to remove the SRP violation, as in design 2?

    Read the article

  • HTB.init / tc behind NAT

    - by Ben K.
    I have an Ubuntu 10 box that I'm trying to set up as a bandwidth-shaping router. The machine has one WAN interface, eth0 and two LAN interfaces, eth1 and eth2. NAT is configured using MASQUERADE as described at InternetConnectionSharing. I'm mostly concerned with shaping outbound traffic from the LAN interfaces -- in the end, I'd like to end up with a hard 768Kbps limit per-LAN-interface (rather than a limit on eth0 pooled across all interfaces). I installed HTB.init, and riffing on the examples, tried to set this up on eth1 by putting three files into /etc/sysconfig/htb: /etc/sysconfig/htb/eth1 DEFAULT=30 R2Q=100 /etc/sysconfig/htb/eth1-2.root RATE=768Kbps BURST=15k /etc/sysconfig/htb/eth1-2:30.dfl RATE=768Kbps CEIL=788Kbps BURST=15k LEAF=sfq I can /etc/init.d/htb start and /etc/init.d/htb stats and see information that /seems/ to suggest it's working...but when I try pulling a large file via the WAN interface the shaping clearly isn't in effect. Any suggestions? My guess is it has something to do with where the shaping falls in the NAT chain, but I really have no idea where to begin troubleshooting this. ---- Update: Here's my /etc/init.d/htb list output, it seems to make sense -- the default rate for eth1 is 768Kbps? ### eth0: queueing disciplines qdisc htb 1: root refcnt 2 r2q 100 default 30 direct_packets_stat 0 qdisc sfq 30: parent 1:30 limit 127p quantum 1514b perturb 10sec ### eth0: traffic classes class htb 1:2 root rate 768000bit ceil 768000bit burst 1599b cburst 1599b class htb 1:30 parent 1:2 leaf 30: prio 0 rate 6144Kbit ceil 6144Kbit burst 15Kb cburst 1598b ### eth0: filtering rules filter parent 1: protocol ip pref 100 u32 filter parent 1: protocol ip pref 100 u32 fh 800: ht divisor 1 filter parent 1: protocol ip pref 100 u32 fh 800::800 order 2048 key ht 800 bkt 0 flowid 1:30 match 00000000/00000000 at 12 match 00000000/00000000 at 16 ### eth1: queueing disciplines qdisc htb 1: root refcnt 2 r2q 100 default 30 direct_packets_stat 0 qdisc sfq 30: parent 1:30 limit 127p quantum 1514b perturb 10sec ### eth1: traffic classes class htb 1:2 root rate 768000bit ceil 768000bit burst 1599b cburst 1599b class htb 1:30 parent 1:2 leaf 30: prio 0 rate 6144Kbit ceil 6144Kbit burst 15Kb cburst 1598b
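
    One thing worth double-checking in that listing, hedged because HTB.init's unit parsing isn't shown here: tc itself treats "kbps" as kilobytes per second and "kbit" as kilobits per second (768 kilobytes/s = 6144 kbit), so it's worth confirming which one ended up on each class. The equivalent raw tc commands for a true 768 kilobit/s leaf on eth1 would be roughly:

      tc qdisc add dev eth1 root handle 1: htb default 30 r2q 100
      tc class add dev eth1 parent 1:  classid 1:2  htb rate 768kbit ceil 768kbit burst 15k
      tc class add dev eth1 parent 1:2 classid 1:30 htb rate 768kbit ceil 788kbit burst 15k
      tc qdisc add dev eth1 parent 1:30 handle 30: sfq perturb 10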

    Read the article

  • I cannot grok MVC, what it is, and what it is not?

    - by Hao
    I cannot grok what MVC is. What mindset or programming model should I acquire so that MVC instantly "lightbulbs" in my head? If not instantly, what simple programs/projects should I try first so I can apply the neat things MVC brings to programming? OOP is intuitive and easier: objects are all around us, and the benefits of code reuse with the OOP paradigm instantly click with anyone. You can probably talk to anybody about OOP for a few minutes, walk through some examples, and they would get it. While OOP somehow raises the intuitiveness of programming, MVC seems to do the opposite. I'm getting negative thoughts that some future employers (or even clients) would look down on me for not using MVC technology. I probably get the skinnable aspect of MVC, but when I try to apply it to my own project, I don't know where to start. Also, some programmers have diverging views on how to accomplish MVC properly. Take this, for instance, from Jeff's post about MVC: "The view is simply how you lay the data out, how it is displayed. If you want a subset of some data, for example, my opinion is that is a responsibility of the model." So maybe some programmers use MVC, but somehow inadvertently use the View or the Controller to extract a subset of data. Why can't we have a definitive definition of what MVC is and how to accomplish it properly? And also, when I search for MVC .NET programs, most of what I find applies to web programs, not desktop apps, which intrigues me further. My guess is that MVC is most advantageous for web apps; there's not much of a problem with intermixed view (HTML) and controller (program code) in desktop apps.

    Read the article

  • How to diagnose a hang when creating a new folder in explorer.exe

    - by Jack Ukleja
    I have been having some issues with explorer.exe hanging when I create a new folder. If I use Analyse Wait Chain in the Resource Monitor it says "One or more threads of explorer.exe are waiting to finish network I/O". When I look at the offending thread in Process Explorer it reveals nothing interesting: ntdll.dll!ZwWaitForMultipleObjects+0xa KERNELBASE.dll!GetCurrentThread+0x36 kernel32.dll!WaitForMultipleObjectsEx+0xb3 USER32.dll!PeekMessageW+0x1cd USER32.dll!MsgWaitForMultipleObjectsEx+0x2a USER32.dll!MsgWaitForMultipleObjects+0x20 SHELL32.dll!SHAppBarMessage+0x41e SHELL32.dll!DragAcceptFiles+0x2a3c SHELL32.dll!DragAcceptFiles+0x2a4f SHELL32.dll!Ordinal211+0x124 SHELL32.dll!SHChangeNotification_Unlock+0x12f4 USER32.dll!GetSystemMetrics+0x2b1 USER32.dll!IsDialogMessageW+0x19b USER32.dll!IsDialogMessageW+0x1e1 ntdll.dll!KiUserCallbackDispatcher+0x1f USER32.dll!PeekMessageW+0xba USER32.dll!PeekMessageW+0x89 SHELL32.dll!SHChangeNotification_Unlock+0xd9f SHELL32.dll!Ordinal885+0x1407 SHLWAPI.dll!SHRegGetUSValueW+0x306 kernel32.dll!BaseThreadInitThunk+0xd ntdll.dll!RtlUserThreadStart+0x21 While I was looking at the explorer.exe threads I did notice a fair few that talk about ETW (Event Tracing for Windows) so obviously explorer.exe uses tracing. So I decided to try and user TraceView.exe to try and listen in on the explorer.exe traces. The problem is TraceView requires some difficult-to-come-by stuff... either pdbs, or CTL files, and .TMF files. I tried using the explorer.pdb that comes with the Windows SDK but that did not work. I do not see explorer.exe in the "named providers". And I have no idea where to locate the ctl or .TMF files for explorer.exe. So the question is: Is there a way to view the ETW trace messages from explorer? Or shall I just not bother and go back to the age old technique of disabling every explorer extenion one-by-one in the hope its one of them. (Prefer the former as I like to get to the bottom of things!!)

    Read the article

  • Event flow in ActionScript 3

    - by Shay
    I try to dispatch a custom event from a component on the stage and I register another component to listen to it, but the other component doesn't get the event. Here is my code; what do I miss? public class Main extends MovieClip //main document class { var compSource:Game; var compMenu:Menu; public function Main() { compSource = new Game; compMenu = new Menu(); var mc:MovieClip = new MovieClip(); addChild(mc); mc.addChild(compSource); // the source of the event - event dispatch when clicked btn mc.addChild(compMenu); //in init of that Movie clip it add listener to the compSource events } } public class Game extends MovieClip { public function Game() { btn.addEventListener(MouseEvent.CLICK, onFinishGame); } private function onFinishGame(e:MouseEvent):void { var score:Number = Math.random() * 100 + 1; dispatchEvent(new ScoreChanged(score)); } } public class Menu extends MovieClip { //TextField score public function Menu() { addEventListener(Event.ADDED_TO_STAGE, init); } private function init(e:Event):void { removeEventListener(Event.ADDED_TO_STAGE, init); //on init add listener to event ScoreChanged addEventListener(ScoreChanged.SCORE_GAIN, updateScore); } public function updateScore(e:ScoreChanged):void { //it never gets here! tScore.text = String(e._score); } } public class ScoreChanged extends Event { public static const SCORE_GAIN:String = "SCORE_GAIN"; public var _score:Number; public function ScoreChanged( score:Number ) { trace("new score"); super( SCORE_GAIN, true); _score = score; } } I don't want to write compSource.addEventListener(ScoreChanged.SCORE_GAIN, compMenu.updateScore); in Main, because I don't want compSource to have to know about compMenu; it's compMenu's responsibility to know which events it needs to listen to... Any suggestions? Thanks!

    Read the article

  • How do I keep a table in sync across multiple SQL Databases?

    - by Refracted Paladin
    I have a WinForms data entry application that uses 4 separate databases. This is an occasionally connected app that uses Merge Replication (SQL 2005) to stay in sync. This is working just fine. The next hurdle I am trying to tackle is adding Filters to my Publications. Right now we are replicating 70 MB, compressed, to each of our 150 subscribers when, truthfully, they only need a tiny fraction of that. Using Filters I am able to accomplish this (see code below) but I had to make a mapping table in order to do so. This mapping table consists of 3 columns: a PrimaryID (Guid), WorkerName (varchar), and ClientID (int). The problem is I need this table present in all FOUR databases in order to use it for the filter since, to my knowledge, views or cross-db queries are not allowed in a Filter Statement. What are my options? Seems like I would set it up to be maintained in one database and then use triggers to keep it updated in the other 3 databases. In order to be a part of the Filter I have to include that table in the Replication Set, so how do I flag it appropriately? Is there a better way altogether? SELECT <published_columns> FROM [dbo].[tblPlan] WHERE [ClientID] IN (select ClientID from [dbo].[tblWorkerOwnership] where WorkerID = SUSER_SNAME()) This allows you to chain Filters together; the next one sits below the first one, so it only pulls from the first's filtered set. SELECT <published_columns> FROM [dbo].[tblPlan] INNER JOIN [dbo].[tblHealthAssessmentReview] ON [tblPlan].[PlanID] = [tblHealthAssessmentReview].[PlanID] P.S. - I know how illogical the DB structure sounds. I didn't make it. I inherited it and was then told to make it a "disconnected app." Go figure!
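
    A hedged sketch of the trigger idea (the second database name is assumed), keeping the mapping table authoritative in one database and refreshing the copy whenever it changes; for a three-column mapping table a naive full refresh is usually cheap enough:

      CREATE TRIGGER trg_SyncWorkerOwnership ON dbo.tblWorkerOwnership
      AFTER INSERT, UPDATE, DELETE
      AS
      BEGIN
          SET NOCOUNT ON;
          DELETE FROM OtherDb.dbo.tblWorkerOwnership;
          INSERT INTO OtherDb.dbo.tblWorkerOwnership (PrimaryID, WorkerName, ClientID)
          SELECT PrimaryID, WorkerName, ClientID
          FROM dbo.tblWorkerOwnership;
      END;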

    Read the article

  • Multi-tenant Access Control: Repository or Service layer?

    - by FreshCode
    In a multi-tenant ASP.NET MVC application based on Rob Conery's MVC Storefront, should I be filtering the tenant's data in the repository or the service layer? 1. Filter tenant's data in the repository: public interface IJobRepository { IQueryable<Job> GetJobs(short tenantId); } 2. Let the service filter the repository data by tenant: public interface IJobService { IList<Job> GetJobs(short tenantId); } My gut-feeling says to do it in the service layer (option 2), but it could be argued that each tenant should in essence have their own "virtual repository," (option 1) where this responsibility lies with the repository. Which is the most elegant approach: option 1, option 2 or is there a better way? Update: I tried the proposed idea of filtering at the repository, but the problem is that my application provides the tenant context (via sub-domain) and only interacts with the service layer. Passing the context all the way to the repository layer is a mission. So instead I have opted to filter my data at the service layer. I feel that the repository should represent all data physically available in the repository with appropriate filters for retrieving tenant-specific data, to be used by the service layer. Final Update: I ended up abandoning this approach due to the unnecessary complexities. See my answer below.

    Read the article

  • DPM 2007 clashing with existing SQL backup job

    - by Paul D'Ambra
    I've recently installed a DPM 2007 server on Server 2003 and have set up a protection group against a Server 2003 server running SQL 2005 SP3. The SQL server in question has a full backup (as a SQL Agent job) once a day and transaction log backups hourly. These are zipped up and FTP'd to a server offsite by a scheduled task. Since adding the DPM job I'm receiving many error messages: DPM tried to do a SQL log backup, either as part of a backup job or a recovery to latest point in time job. The SQL log backup job has detected a discontinuity in the SQL log chain for database SERVER_NAME\DB_Name since the last backup. All incremental backup jobs will fail until an express full backup runs. My google-fu suggests that I need to change the full backup my SQL Agent job is running to a copy_only job. But I think this means that I can't use that backup with the transaction logs to restore the database if the building (including the DPM server) burns down. I'm sure I'm missing something obvious and thought I'd see what the hivemind suggests. It is an option to set up a co-located DPM server elsewhere and have DPM stream the backup, but that's obviously more expensive than the current setup.
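
    For reference, a sketch of the copy_only variant the question mentions (path and database name are placeholders); a COPY_ONLY backup is independent of the normal backup sequence, so it can keep feeding the offsite FTP job without interfering with the backups DPM now owns:

      BACKUP DATABASE [DB_Name]
      TO DISK = N'D:\Backups\DB_Name_full.bak'
      WITH COPY_ONLY, INIT, CHECKSUM;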

    Read the article

< Previous Page | 56 57 58 59 60 61 62 63 64 65 66 67  | Next Page >