Search Results

Search found 10391 results on 416 pages for 'sys dm exec requests'.


  • Dynamic query runs directly but not through variable, what could be the reason?

    - by waheed
    Here is my scenario: I'm creating a dynamic query using a select statement which uses functions to generate the query. I am storing it in a variable and running it using exec, i.e. declare @dsql nvarchar(max) set @dsql = '' select @dsql = @dsql + dbo.getDynmicQuery(column1, column2) from Table1 exec(@dsql) In this scenario it produces many errors, like 'Incorrect syntax near ','' and 'Case expressions may only be nested to level 10.' But if I take the text from @dsql and assign it to a variable manually, like: declare @dsql nvarchar(max) set @dsql = '' set @dsql = N'<Dynamic query text>' exec(@dsql) it runs and generates the result. What could be the reason for that? Thanks.

    Read the article

  • Linux - How to control Winbind Authentication cache timeout

    - by cybervedaa
    I have configured my Linux machines (running CentOS 5.2) to authenticate against a Windows server running Active Directory. I have even enabled winbind offline logon. Everything works as expected; however, I'm also looking to impose a TTL on the winbind authentication cache. So far all I have found is the snippet below from the Samba documentation: winbind cache time (G) This parameter specifies the number of seconds the winbindd(8) daemon will cache user and group information before querying a Windows NT server again. **This does not apply to authentication requests**, these are always evaluated in real time unless the winbind offline logon option has been enabled. Default: winbind cache time = 300 Clearly the winbind cache time parameter does not control the cache TTL for authentication requests. Is there any other way I can implement a cache timeout for winbind authentication requests? Thank you

    Read the article

  • Works using Django development server, but throws import error with Apache using mod_wsgi

    - by Raj
    I have a Django project that works fine with the development server that comes with it. No errors are produced at all when I use "django manage.py runserver" and the app works fine, but when I try to use it with mod_wsgi and Apache the browser displays "Internal Server Error" with a 500 status code, and an import error appears in the Apache error log. Here's the error in the log: ImportError: No module named registration I'm using the Django registration module, which is located in a path like this: /opt/raj/photos/registration I know that the registration app is on the path because I can fire up a Python shell, import sys, and get a list of paths using sys.path. Here are some of the paths output from the Python shell: sys.path ['', '/opt/raj/pyamf', '/opt/raj', '/opt/raj/pictures', '/opt/raj/pictures/registration', '/usr/lib/python2.6', ...] Any thoughts would be appreciated.

    Read the article
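
    Not part of the thread above, but for orientation: a minimal sketch of the kind of .wsgi file mod_wsgi usually points at, whose job is to put the project directories on sys.path before Django loads. The paths and the settings module name below are assumptions based on the sys.path output quoted in the question.

    ```python
    # django.wsgi -- illustrative sketch only; paths and settings module are assumed
    import os
    import sys

    # mod_wsgi runs inside Apache's own interpreter, so the paths the interactive
    # shell sees are not automatically what Apache sees; add them explicitly.
    for path in ('/opt/raj', '/opt/raj/pictures'):
        if path not in sys.path:
            sys.path.insert(0, path)

    os.environ['DJANGO_SETTINGS_MODULE'] = 'pictures.settings'  # assumed name

    import django.core.handlers.wsgi
    application = django.core.handlers.wsgi.WSGIHandler()
    ```

    If the error persists, comparing the sys.path Apache actually uses (written to the error log via sys.stderr from the .wsgi file) against the interactive one usually shows which directory is missing.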

  • Rewrite URL before passing to proxy Lighttpd

    - by futureelite7
    Hi, I'm trying to set up a reverse proxy in lighttpd, such that all requests (and only those requests) under /mobile/video are redirected to the / directory of a secondary web server. This is pretty easy in Apache, but I can't for the life of me figure out how to do it in lighttpd. $HTTP["url"] =~ "^/wsmobile/video/" { url.rewrite-once = ( ".*" => "/test/" ) proxy.server = ( "" => ( ( "host" => "210.200.144.26", "port" => 9091 ) ) ) } I've tried using the http["url"] directive, but lighttpd simply ignores those requests and continues to pass the full URL to the secondary server, which of course chokes and throws 404s. However, if I do a global rewrite then everything gets forwarded to the secondary server, which is also not what I want. How do I go about this task?

    Read the article

  • Per-machine decentralised DNS caching - nscd/lwresd/etc

    - by Dan Carley
    Preface: We have caching resolvers at each of our geographic network locations. These are clustered for resiliency, and their locality reduces the latency of internal requests generated by our servers. This works well. Except that a vast quantity of the requests seen over the wire are lookups for the same records, generated by applications which don't perform any DNS caching of their own. Questions: Is there a significant benefit to running lightweight caching daemons on the individual servers in order to reduce repeated requests from hitting the network? Does anyone have experience of using [u]nscd, lwresd or dnscache to do such a thing? Are there any other packages worth looking at? Any caveats to beware of, besides the obvious one of caching (and negative caching) stale results?

    Read the article

  • PySide / PyQt QStyledItemDelegate list in table

    - by danodonovan
    Dear All, I'm trying to create a table of lists in Python with Qt (PySide or PyQt, it doesn't matter which) and my lists are squashed into the table cells. Is there a way to get the list delegates to 'pop out' of their cells? I've attached a simple code snippet - replace 'PySide' with 'PyQt4' depending on your preference: from PySide import QtCore, QtGui class ListDelegate(QtGui.QStyledItemDelegate): def createEditor(self, parent, option, index): editor = QtGui.QListWidget(parent) for i in range( 12 ): editor.addItem('list item %d' % i) return editor if __name__ == '__main__': import sys app = QtGui.QApplication(sys.argv) model = QtGui.QStandardItemModel(2, 2) tableView = QtGui.QTableView() delegate = ListDelegate() tableView.setItemDelegate(delegate) tableView.setModel(model) for row in range(2): for column in range(2): item = QtGui.QStandardItem( 'None' ) model.setItem(row, column, item) tableView.setWindowTitle('example') tableView.show() sys.exit(app.exec_()) If you hadn't guessed, I'm pretty new to this Qt lark. Thanks in advance, Dan

    Read the article
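
    Not from the thread, just one direction to experiment with: a delegate can place its editor wherever it likes by overriding updateEditorGeometry, so the list can be given more vertical room than the cell itself. A rough sketch reusing the ListDelegate idea above (the 150-pixel height is an arbitrary choice):

    ```python
    from PySide import QtCore, QtGui   # or PyQt4, as in the question

    class PoppingListDelegate(QtGui.QStyledItemDelegate):
        def createEditor(self, parent, option, index):
            editor = QtGui.QListWidget(parent)
            for i in range(12):
                editor.addItem('list item %d' % i)
            return editor

        def updateEditorGeometry(self, editor, option, index):
            # Instead of squeezing the editor into the cell rectangle, grow it
            # downwards from the cell's top-left corner; as a child of the view's
            # viewport it is allowed to overlap the neighbouring cells.
            editor.setGeometry(QtCore.QRect(option.rect.topLeft(),
                                            QtCore.QSize(option.rect.width(), 150)))
    ```

    Swapping this in via tableView.setItemDelegate(PoppingListDelegate()) should make the open editor taller than its cell; whether that counts as 'popping out' enough is a matter of taste.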

  • How can I maintain a SqlConnection open at all times

    - by Salvador
    How can I keep a SqlConnection (or another component) open (connected) at all times during the execution of my .NET app? I need this because my app needs to detect, using the command exec sp_who2, how many instances of my app are connected to my database, in order to restrict access (license control). Example: A) my app is executed from location1; it checks the number of my apps connected to the SQL server using exec sp_who2; if the number of my applications < MaxLicencesConnected then my app starts and opens a SqlConnection. B) my app is executed from location2; it checks the number of my apps connected to the SQL server using exec sp_who2; if the number of my applications = MaxLicencesConnected then my application closes. Sorry for my English. Thanks in advance.

    Read the article

  • Lighttpd as a proxy to a https host

    - by Homer J. Simpson
    Hi, I am trying to set up lighttpd as a proxy from one server to another (which is running Apache/SSL), and I'm having trouble with the https part. I want to be able to capture https requests and let the Apache server handle them; I'm trying this: $SERVER["socket"] == ":443" { $HTTP["host"] == "www.mydomain.com" { proxy.server = ( "" => ( ( "host" => "123.123.123.123", # the Apache "port" => 443))) } } Normal port 80 requests are working fine. What am I doing wrong? Edit: Additionally, error.log doesn't show anything. Requests to https://www.mydomain.com never finish.

    Read the article

  • Calling activateWindow on QDialog sends window to background

    - by Stan
    I am debugging a certain application written with C++/Qt4. On Linux it has a problem where, with certain window managers (gnome-wm/metacity), the main window (based on QDialog) is created in the background (it's not raised). I managed to re-create the scenario using PyQt4 and the following code: from PyQt4.QtCore import * from PyQt4.QtGui import * import sys class PinDialog(QDialog): def showEvent(self, event): QDialog.showEvent(self, event) self.raise_() self.activateWindow() if __name__ == "__main__": app = QApplication(sys.argv) widget = PinDialog() app.setActiveWindow(widget) widget.exec_() sys.exit(0) If I remove self.activateWindow() the application works as expected. This seems wrong, since the documentation for activateWindow does not specify any conditions under which something like this could happen. My question is: Is there any reason to have activateWindow in showEvent in the first place? If there is some reason, what would be a good workaround for the focusing issues?

    Read the article
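
    One workaround that is sometimes suggested for focus problems of this kind (offered as a sketch, not as the answer from the thread): defer the raise_/activateWindow calls with a zero-timeout single-shot timer so they run after the window manager has mapped the window. Whether it helps depends on the window manager's focus-stealing policy.

    ```python
    import sys
    from PyQt4.QtCore import QTimer
    from PyQt4.QtGui import QApplication, QDialog

    class PinDialog(QDialog):
        def showEvent(self, event):
            QDialog.showEvent(self, event)
            # Run the calls on the next pass through the event loop, after the
            # window manager has had a chance to map the window.
            QTimer.singleShot(0, self.raise_)
            QTimer.singleShot(0, self.activateWindow)

    if __name__ == "__main__":
        app = QApplication(sys.argv)
        widget = PinDialog()
        app.setActiveWindow(widget)
        widget.exec_()
        sys.exit(0)
    ```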

  • android : how to run a shell command from within code

    - by ee3509
    I am trying to execute a command from within my code. The command is "echo 125 > /sys/devices/platform/flashlight.0/leds/flashlight/brightness" and I can run it without problems from adb shell. I am using the Runtime class to execute it: Runtime.getRuntime().exec("echo 125 > /sys/devices/platform/flashlight.0/leds/flashlight/brightness"); However I get a permissions error, since I am not supposed to access the sys directory. I have also tried to place the command in a String[] just in case spaces caused a problem, but it didn't make much difference. Does anyone know any workaround for this?

    Read the article

  • Prioritize One Network Share Over Another And/Or Cap Network Share Traffic? (Windows Server 2008 R2 Enterprise)

    - by FullTimeCoderPartTimeSysAdmin
    One of my fileservers is a Hyper-V VM running Windows Server 2008 R2 Enterprise. The NIC in the VM maps to a 1 Gbps NIC on the host that is dedicated just to this VM. I have two shares on the file server. One is very important and used by a few users. The other is less important but used by many users. The issue I'm having: when a ton of users are accessing the unimportant share, it can choke out requests to the important share. What I'd like to do: I'd like to somehow prioritize requests for files on the more important share, or even dedicate a portion of the NIC's bandwidth just to requests for files on that share. Is there any way to do that? Alternatively, can I add another NIC and specify that all traffic to one share goes over one NIC and traffic to the other share goes over the other?

    Read the article

  • Shaping outbound Traffic to Control Download Speeds with Linux

    - by Kyle Brandt
    I have a situation where a server makes lots of requests to big webservers all at the same time. Currently, I have no control over the number of requests or the rate of the requests from the application that does this. The responses from these webservers are more than the internet line can handle. (Basically, we are launching a DoS on ourselves.) I am going to push to get this fixed at the application level, but for the time being, is there any way I can use traffic shaping on the Linux server to control this? I know I can only shape outbound traffic, but maybe there is a way I can slow the TCP responses so the other side will detect congestion and this will help my situation? If there is anything like this with tc, what might the configuration look like? The idea is that the traffic control might help me control which packets get dropped before they reach my router.

    Read the article

  • Windows Server 2008 R2 DNS Server Intermittently Unresponsive

    - by Ablue
    Throughout the day our DNS servers (2x Win 2k8 R2 servers) are unable to respond to requests. The requests that fail are all in the . (root) zone, and are either cached or obtained from 1 of the 5 DNS servers we forward to before going to root hints. At first I thought the DNS servers we were forwarding to were flaky, so I added some more in. Currently the forwarding list looks like: ISP DNS 1 OPEN DNS 1 ISP DNS 2 OPEN DNS 2 ISP DNS 3 I have tried: Turning off root hints. Setting record scavenging to 7 days. Using dnscmd /config /EnableEDNSProbes 0 as per this. Packet capture at the DNS server shows that there are a lot of query responses with server failure between LAN clients and the local DNS server; it does not appear to be forwarding those requests. So maybe a problem with caching? Anyhow, does anyone have anything I can try to get this working?

    Read the article

  • Python Vector Class

    - by sfjedi
    I'm coming from a C# background where this stuff is super easy—trying to translate into Python for Maya. There's gotta' be a better way to do this. Basically, I'm looking to create a Vector class that will simply have x, y and z coordinates, but it would be ideal if this class returned a tuple with all 3 coordinates and if you could edit the values of this tuple through x, y and z properties, somehow. This is what I have so far, but there must be a better way to do this than using an exec statement, right? I hate using exec statements. class Vector(object): '''Creates a Maya vector/triple, having x, y and z coordinates as float values''' def __init__(self, x=0, y=0, z=0): self.x, self.y, self.z = x, y, z def attrsetter(attr): def set_float(self, value): setattr(self, attr, float(value)) return set_float for xyz in 'xyz': exec("%s = property(fget=attrgetter('_%s'), fset=attrsetter('_%s'))" % (xyz, xyz, xyz))

    Read the article
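
    For comparison, a minimal sketch of the same idea without exec, generating the three properties from a small helper at class-creation time (only the names from the snippet above are reused; everything else is illustrative):

    ```python
    class Vector(object):
        """x, y and z stored as floats, exposed as plain properties."""

        def __init__(self, x=0, y=0, z=0):
            self.x, self.y, self.z = x, y, z      # goes through the setters below

        def _coord(name):                         # plain helper, runs while the class body executes
            def fget(self):
                return getattr(self, '_' + name)
            def fset(self, value):
                setattr(self, '_' + name, float(value))
            return property(fget, fset)

        x, y, z = _coord('x'), _coord('y'), _coord('z')
        del _coord                                # keep the helper out of the finished class

        def __iter__(self):                       # so tuple(v) gives all three coordinates
            return iter((self.x, self.y, self.z))

        def __repr__(self):
            return 'Vector(%r, %r, %r)' % (self.x, self.y, self.z)
    ```

    With this sketch, v = Vector(1, 2, 3) gives tuple(v) == (1.0, 2.0, 3.0), and v.x = 5 stores 5.0, which appears to be the behaviour the exec version was after.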

  • appcfg.py upload_data entity kind problem

    - by Dingo
    Hi, I am developing an application on app-engine-path and I would like to upload some data to the datastore. For example, I have a model models/places.py: class Place(db.Model): name = db.StringProperty() longitude = db.FloatProperty() latitude = db.FloatProperty() If I save this in a view, the kind() of this entity is "models_place". All is OK; Place.all() in the view works fine. But if I upload some new rows using appcfg.py upload_data, the kind() of those entities is Place. loader.py looks like this: import datetime, os, sys from google.appengine.ext import db from google.appengine.tools import bulkloader libs_path = os.path.join("/home/martin/myproject/src/") if libs_path not in sys.path: sys.path.insert(0, libs_path) from models import places class AlbumLoader(bulkloader.Loader): def __init__(self): bulkloader.Loader.__init__(self, 'Place', [('name', lambda x: x.decode('utf-8')), ('longitude', float), ('latitude', float), ]) loaders = [AlbumLoader] and the command for uploading: python /usr/local/google_appengine/appcfg.py upload_data --config_file=places_loader.py --kind=models_place --filename=data/places.csv --url=http://localhost:8000/remote_api /home/martin/myproject/src/

    Read the article
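
    Not a confirmed fix, but one way to take the ambiguity out of the kind name is to pin it on the model itself by overriding the kind() classmethod, so the web app and any loader agree on the same string. A sketch (whether this is appropriate depends on why the framework renames the kind in the first place):

    ```python
    from google.appengine.ext import db

    class Place(db.Model):
        name = db.StringProperty()
        longitude = db.FloatProperty()
        latitude = db.FloatProperty()

        @classmethod
        def kind(cls):
            # Pin the datastore kind explicitly instead of relying on whatever
            # the framework derives from the class or module name.
            return 'models_place'
    ```

    The loader's first argument and the --kind passed to appcfg.py would then need to use that same string.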

  • Weighted round robins via TTL - possible?

    - by Joe Hopfgartner
    I currently use DNS round robin for load balancing, which works great. The records look like this (I have a TTL of 120 seconds): ;; ANSWER SECTION: orion.2x.to. 116 IN A 80.237.201.41 orion.2x.to. 116 IN A 87.230.54.12 orion.2x.to. 116 IN A 87.230.100.10 orion.2x.to. 116 IN A 87.230.51.65 I learned that not every ISP / device treats such a response the same way. For example, some DNS servers rotate the addresses randomly or always cycle through them. Some just propagate the first entry, others try to determine which is best (regionally near) by looking at the IP address. However, if the userbase is big enough (spread over multiple ISPs etc.) it balances pretty well. The discrepancy from the highest to the lowest loaded server hardly ever exceeds 15%. However, now I have the problem that I am introducing more servers into the system, and not all of them have the same capacity. I currently only have 1 Gbps servers, but I want to work with 100 Mbit and also 10 Gbps servers too. So what I want is to introduce a server with 10 Gbps with a weight of 100, a 1 Gbps server with a weight of 10 and a 100 Mbit server with a weight of 1. I used to add servers twice to bring more traffic to them (which worked nicely; the bandwidth almost doubled). But adding a 10 Gbit server 100 times to DNS is a bit ridiculous. So I thought about using the TTL. If I give server A a 240-second TTL and server B only 120 seconds (which is about the minimum to use for round robin, as a lot of DNS servers bump it up to 120 if a lower TTL is specified, or so I have heard), I think something like this should occur in an ideal scenario:
    First 120 seconds: 50% of requests get server A -> keep it for 240 seconds; 50% of requests get server B -> keep it for 120 seconds.
    Second 120 seconds: 50% of requests still have server A cached -> keep it for another 120 seconds; 25% of requests get server A -> keep it for 240 seconds; 25% of requests get server B -> keep it for 120 seconds.
    Third 120 seconds: 25% will get server A (from the 50% of server A that now expired) -> cache 240 sec; 25% will get server B (from the 50% of server A that now expired) -> cache 120 sec; 25% will have server A cached for another 120 seconds; 12.5% will get server B (from the 25% of server B that now expired) -> cache 120 sec; 12.5% will get server A (from the 25% of server B that now expired) -> cache 240 sec.
    Fourth 120 seconds: 25% will have server A cached -> cache for another 120 secs; 12.5% will get server A (from the 25% of B that now expired) -> cache 240 secs; 12.5% will get server B (from the 25% of B that now expired) -> cache 120 secs; 12.5% will get server A (from the 25% of A that now expired) -> cache 240 secs; 12.5% will get server B (from the 25% of A that now expired) -> cache 120 secs; 6.25% will get server A (from the 12.5% of B that now expired) -> cache 240 secs; 6.25% will get server B (from the 12.5% of B that now expired) -> cache 120 secs; 12.5% will have server A cached -> cache another 120 secs ...
    I think I lost track somewhere at this point, but you get the idea. As you can see, this gets pretty complicated to predict, and it will for sure not work out like this in practice, but it should definitely have an effect on the distribution! I know that weighted round robin exists and is just controlled by the root server: it cycles through DNS records when responding and returns records with a set probability that corresponds to the weighting. My DNS server does not support this, and my requirements are not that precise.
    If it doesn't weight perfectly that's okay, but it should go in the right direction. I think using the TTL field could be a more elegant and easier solution - it doesn't require a DNS server that controls this dynamically, which saves resources - and that is, in my opinion, the whole point of DNS load balancing vs hardware load balancers. My question now is: are there any best practices / methods / rules of thumb for weighting round robin distribution using the TTL attribute of DNS records? Edit: The system is a forward proxy server system. The amount of bandwidth (not requests) exceeds what one single server with Ethernet can handle. So I need a balancing solution that distributes the bandwidth to several servers. Are there any alternative methods to using DNS? Of course I can use a load balancer with fibre channel etc., but the costs are ridiculous, and it also only increases the width of the bottleneck and does not eliminate it. The only thing I can think of is anycast (is it anycast or multicast?) IP addresses, but I don't have the means to set up such a system.

    Read the article
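
    Not from the thread, just a back-of-the-envelope check of the scenario walked through above. Under the idealised assumption that every resolver honours the TTL exactly, re-queries as soon as it expires, and gets a uniformly random record each time, the long-run traffic share can be estimated with a few lines of simulation (the server names here are made up):

    ```python
    import random

    def traffic_share(ttls, lookups=200000):
        """Idealised model of one resolver: on every cache expiry it receives a
        uniformly chosen A record and keeps it for that record's TTL, while
        sending requests at a constant rate the whole time.  Each server's
        share of traffic is then simply its share of *cached time*."""
        names = list(ttls)
        cached = dict.fromkeys(names, 0.0)
        for _ in range(lookups):
            name = random.choice(names)      # plain, unweighted round robin answer
            cached[name] += ttls[name]       # the record stays cached for its TTL
        total = sum(cached.values())
        return dict((name, cached[name] / total) for name in names)

    # TTLs of 240s vs 120s converge to roughly a 2:1 split, matching the
    # ratio of the TTLs themselves.
    print(traffic_share({'server_a_240s': 240, 'server_b_120s': 120}))
    ```

    In this toy model the split simply follows the ratio of the TTLs, so a 100:10:1 weighting would need TTLs in that same 100:10:1 ratio (say 12000/1200/120 seconds), with all the cache staleness and slow failover that implies; real resolvers that clamp or ignore TTLs would blur it further.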

  • Is it possible to auto-import a module from a different subfolder in another subfolder?

    - by mamcx
    I have a kind of plugin system, with this layout: - Python -- SDK -- Plugins ---- Plugin1 ---- Plugin2 All three have an __init__.py file. I wonder if it is possible to do import SDK from any plugin (as if SDK were in the site-packages folder). I'm in a situation where I need to deploy, update, delete, add or change SDK files or any of the plugins under non-admin accounts, and I wonder if I can get SDK in a clean way (I could sys.path.append in all plugins, but I wonder if a better option exists). I imagined that using this in the Plugins __init__ could work: import sys import os ROOT_DIR = os.path.abspath(os.path.join(os.path.dirname(__file__),'..')) print ROOT_DIR sys.path.append( ROOT_DIR ) But clearly this code is not executed (I had imagined __init__ was auto-magically executed when the module is loaded :( )

    Read the article
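
    One direction that avoids touching every plugin (a sketch only, not a tested answer; the file and folder names follow the layout in the question): do the sys.path setup in whichever piece of code loads the plugins, before importing them, so every plugin can simply import SDK at its top level.

    ```python
    # plugin_host.py -- assumed to live in the 'Python' folder from the layout above
    import importlib
    import os
    import sys

    ROOT_DIR = os.path.abspath(os.path.dirname(__file__))
    if ROOT_DIR not in sys.path:
        sys.path.insert(0, ROOT_DIR)          # makes 'import SDK' resolvable everywhere

    def load_plugins():
        plugins_dir = os.path.join(ROOT_DIR, 'Plugins')
        loaded = []
        for name in sorted(os.listdir(plugins_dir)):
            if os.path.isdir(os.path.join(plugins_dir, name)):
                # Each plugin package is imported through the Plugins package
                # and can do 'import SDK' freely thanks to the path tweak above.
                loaded.append(importlib.import_module('Plugins.' + name))
        return loaded
    ```

    If the plugins are currently loaded by file path rather than as Plugins.PluginX, that would also explain why the __init__.py snippet in the question never executes.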

  • How to secure Firefox traffic (+DNS) through SOCKS proxy under Ubuntu 10.04?

    - by Maarx
    I'm using Ubuntu 10.04, and starting a SOCKS proxy with 'ssh -D', and setting Ubuntu to use it with "System - Preferences - Network Proxy". Firefox uses the proxy, and the proxy's IP appears when I visit a site like http://www.whatismyip.com/. My question is, is Firefox resolving DNS requests through this proxy? Is my web-browsing truly secure? (That is, until I exit the other end of the proxy. I know it's insecure after that.) (And I've verified the keys, I'm not being man-in-the-middled) (And--screw it. You know what I mean. Is it resolving DNS requests through the proxy?) I don't know how I would go about verifying such a thing for myself. Using additional hardware such as another debugging proxy is not an option. If Firefox isn't resolving my DNS requests through the SOCKS proxy, how do I go about fixing it?

    Read the article

  • graphviz segmentation fault

    - by LucaB
    Hi, I'm building a graph with many nodes, around 3000. I wrote a simple Python program to do the trick with graphviz, but it gives me a segmentation fault and I don't know why: whether the graph is too big or I'm missing something. The code is: #!/usr/bin/env python # Import graphviz import sys sys.path.append('..') sys.path.append('/usr/lib/graphviz') import gv # Import pygraph from pygraph.classes.graph import graph from pygraph.classes.digraph import digraph from pygraph.algorithms.searching import breadth_first_search from pygraph.readwrite.dot import write # Graph creation gr = graph() file = open('nodes.dat', 'r') line = file.readline() while line: gr.add_nodes([line[0:-1]]) line = file.readline() file.close() print 'nodes finished, beginning edges' edges = open('edges_ok.dat', 'r') edge = edges.readline() while edge: gr.add_edge((edge.split()[0], edge.split()[1])) edge = edges.readline() edges.close() print 'edges finished' print 'Drawing' # Draw as PNG dot = write(gr) gvv = gv.readstring(dot) gv.layout(gvv,'dot') gv.render(gvv,'svg','graph.svg') and it crashes at the gv.layout() call. The files are something like: nodes: node1 node2 node3 edges_ok: node1 node2 node2 node3

    Read the article
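
    Not a diagnosis from the thread, but one cheap thing to rule out before blaming the graph size: blank lines or stray whitespace in the input files can produce empty or duplicate node names, and edges referencing unknown nodes, which layout engines tend to dislike. A sketch of a more defensive reading loop for the same two files (has_node/has_edge are assumed to exist alongside the pygraph calls already used above):

    ```python
    from pygraph.classes.graph import graph

    def build_graph(nodes_path, edges_path):
        """Read the node and edge files defensively: strip whitespace instead of
        slicing off the last character, skip blank lines, and only keep edges
        whose endpoints are known, non-duplicate nodes."""
        gr = graph()
        with open(nodes_path) as f:
            for line in f:
                name = line.strip()
                if name and not gr.has_node(name):
                    gr.add_node(name)
        skipped = 0
        with open(edges_path) as f:
            for line in f:
                parts = line.split()
                if (len(parts) == 2 and gr.has_node(parts[0])
                        and gr.has_node(parts[1])
                        and not gr.has_edge((parts[0], parts[1]))):
                    gr.add_edge((parts[0], parts[1]))
                else:
                    skipped += 1
        print('skipped %d blank or problematic edge lines' % skipped)
        return gr
    ```

    If the skip counter stays at zero and it still crashes at gv.layout, the problem is more likely in the graphviz bindings themselves than in the input data.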

  • PyQt4: Why does Python crash on close when using QTreeWidgetItem?

    - by Rini
    I'm using Python 3.1.1 and PyQt4 (not sure how to get that version number?). Python is crashing whenever I exit my application. I've seen this before as a garbage collection issue, but this time I'm not sure how to correct the problem. This code crashes: import sys from PyQt4 import QtGui class MyWindow(QtGui.QMainWindow): def __init__(self, parent=None): QtGui.QMainWindow.__init__(self, parent) self.tree = QtGui.QTreeWidget(self) self.setCentralWidget(self.tree) QtGui.QTreeWidgetItem(self.tree) # This line is the problem self.show() app = QtGui.QApplication(sys.argv) mw = MyWindow() sys.exit(app.exec_()) If I remove the commented line, the code exits without a problem. If I remove the 'self.tree' parent from the initialization, the code exits without a problem. If I try to use self.tree.addTopLevelItem, the code crashes again. What could be the problem?

    Read the article
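
    Offered only as a sketch of a workaround that sometimes helps with exit-time crashes, not as a confirmed fix for this report: make the teardown order explicit, so the tree items and the window are released while the QApplication still exists, instead of leaving it all to interpreter shutdown.

    ```python
    import sys
    from PyQt4 import QtGui

    class MyWindow(QtGui.QMainWindow):
        def __init__(self, parent=None):
            QtGui.QMainWindow.__init__(self, parent)
            self.tree = QtGui.QTreeWidget(self)
            self.setCentralWidget(self.tree)
            QtGui.QTreeWidgetItem(self.tree)
            self.show()

    def main():
        app = QtGui.QApplication(sys.argv)
        mw = MyWindow()
        ret = app.exec_()
        # Tear things down explicitly, children first, while 'app' is alive.
        mw.tree.clear()
        del mw
        return ret

    if __name__ == '__main__':
        sys.exit(main())
    ```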

  • How to call ejabberdctl from PHP (Apache)

    - by Adil
    Hi, I am trying to call ejabberdctl from PHP but I keep getting an error code of 3 (Failed RPC connection to the node ejabberd@localhost: nodedown). My PHP script contains the following code to add friends: exec('sudo /opt/ejabberd-2.1.2/bin/ejabberdctl add_rosteritem adil.baig40122310029739 godudu.com chburaska0822431111022397 godudu.com chburaska0822431111022397 Friends both', $output, $retCode); exec('sudo /opt/ejabberd-2.1.2/bin/ejabberdctl add_rosteritem chburaska0822431111022397 godudu.com adil.baig40122310029739 godudu.com adil.baig40122310029739 Friends both', $output, $retCode); I have also added ejabberdctl to /etc/sudoers like so: # Custom entry for ejabberdctl, so it can be used via PHP www-data ALL= NOPASSWD: /opt/ejabberd-2.1.2/bin/ejabberdctl I have also added the ejabberd bin directory to /etc/environment, like so: PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/opt/ejabberd-2.1.2/bin" source /etc/environment Every time I run the PHP script, $retCode (the exec return code) is 3, but if I run the same PHP file from the command line it works. Help!

    Read the article

  • How to check if a Statistics is auto-created in a SQL Server 2000 DB using T-SQL?

    - by The Shaper
    Hi all. A while back I had to come up with a way to clean up all indexes and user-created statistics from some tables in a SQL Server 2005 database. After a few attempts it worked, but now I've got to have it working in SQL Server 2000 databases as well. For SQL Server 2005, I used SELECT Name FROM sys.stats WHERE object_id = object_id(@tableName) AND auto_created = 0 to fetch statistics that were user-created. However, SQL 2000 doesn't have a sys.stats view. I managed to fetch the indexes and statistics in a distinguishable way from the sysindexes table, but I just couldn't figure out what the SQL 2000 equivalent of the sys.stats.auto_created check is. Any pointers? BTW: T-SQL please.

    Read the article

  • The dictionary needs to add every word in SpellingMistakes and the line number but it only adds the l

    - by Will Boomsight
    modules import sys import string Importing and reading the files from the Command Prompt Document = open(sys.argv[1],"r") Document = open('Wc.txt', 'r') Document = Document.read().lower() Dictionary = open(sys.argv[2],"r") Dictionary = open('Dict.txt', 'r') Dictionary = Dictionary.read() def Format(Infile): for ch in string.punctuation: Infile = Infile.replace(ch, "") for no in string.digits: Infile = Infile.replace(no, " ") Infile = Infile.lower() return(Infile) def Corrections(Infile, DictWords): Misspelled = set([]) Infile = Infile.split() DictWords = DictWords.splitlines() for word in Infile: if word not in DictWords: Misspelled.add(word) Misspelled = sorted(Misspelled) return (Misspelled) def Linecheck(Infile,ErrorWords): Infile = Infile.split() lineno = 0 Noset = list() for line in Infile: lineno += 1 line = line.split() for word in line: if word == ErrorWords: Noset.append(lineno) sorted(Noset) return(Noset) def addkey(error,linenum): Nodict = {} for line in linenum: Nodict.setdefault(error,[]).append(linenum) return Nodict FormatDoc = Format(Document) SpellingMistakes = Corrections(FormatDoc,Dictionary) alp = str(SpellingMistakes) for word in SpellingMistakes: nSet = str(Linecheck(FormatDoc,word)) nSet = nSet.split() linelist = addkey(word, nSet) print(linelist) # # for word in Nodict.keys(): # Nodict[word].append(line) Prints each incorrect word on a new line

    Read the article
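
    As a point of comparison for the code above (the names here are made up, not taken from the assignment): the word-to-line-numbers mapping becomes much simpler if the document is walked line by line once and the numbers are collected in a defaultdict, rather than converting intermediate results to strings.

    ```python
    from collections import defaultdict

    def line_numbers(text, misspelled):
        """Map each misspelled word to the list of line numbers it appears on."""
        misspelled = set(misspelled)
        occurrences = defaultdict(list)
        for lineno, line in enumerate(text.splitlines(), 1):
            for word in line.lower().split():
                if word in misspelled and lineno not in occurrences[word]:
                    occurrences[word].append(lineno)
        return dict(occurrences)

    # e.g. {'helo': [1, 3], 'wrld': [1]}
    print(line_numbers("helo wrld\nhello world\nhelo again", {'helo', 'wrld'}))
    ```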

  • Dummy HTTP server for debugging

    - by Andrea
    This is more or less the inverse of my previous question. I need to debug some HTTP requests that I am making. Since these requests arise from the use of some external libraries, sometimes I am not sure what data I am actually sending. Is there some dummy server (for Linux) that accepts HTTP requests and just prints them somewhere so that I can inspect them? I would like to be able to see the full request in plain text, like POST /foo HTTP/1.1 Host: www.example.com Accept: text/xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5 Accept-Language: en-gb,en;q=0.5 Content-Type: text/plain Content-Length: 11 Hello world

    Read the article
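
    A minimal sketch of such a dummy server using only the standard library (Python 2 module names, matching the era of the question; in Python 3 the same classes live in http.server). It accepts any common method, dumps the request line, headers and body to stdout, and answers 200:

    ```python
    # dump_server.py -- print every incoming HTTP request, then reply 200
    from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer

    class DumpHandler(BaseHTTPRequestHandler):
        def _dump(self):
            length = int(self.headers.getheader('Content-Length') or 0)
            body = self.rfile.read(length) if length else ''
            print(self.raw_requestline.rstrip())            # e.g. POST /foo HTTP/1.1
            print(''.join(self.headers.headers).rstrip())   # raw header lines
            if body:
                print('')
                print(body)
            print('-' * 60)
            self.send_response(200)
            self.end_headers()

        # Treat every common method the same way.
        do_GET = do_POST = do_PUT = do_DELETE = do_HEAD = _dump

        def log_message(self, fmt, *args):
            pass   # silence the default access log; the requests are printed above

    if __name__ == '__main__':
        print('listening on http://localhost:8080 ...')
        HTTPServer(('', 8080), DumpHandler).serve_forever()
    ```

    Pointing the library at http://localhost:8080 then shows each request in roughly the plain-text form quoted above.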

  • Avoiding NPE in trait initialization without using lazy vals

    - by 0__
    This is probably covered by the blog entry by Jesse Eichar, but I still can't figure out how to correct the following without resorting to lazy vals so that the NPE is fixed. Given trait FooLike { def foo: String } case class Foo( foo: String ) extends FooLike trait Sys { type D <: FooLike def bar: D } trait Confluent extends Sys { type D = Foo } trait Mixin extends Sys { val global = bar.foo } First attempt: class System1 extends Mixin with Confluent { val bar = Foo( "npe" ) } new System1 // boom!! Second attempt, changing mixin order class System2 extends Confluent with Mixin { val bar = Foo( "npe" ) } new System2 // boom!! Now I use both bar and global very heavily, and therefore I don't want to pay a lazy-val tax just because Scala (2.9.2) doesn't get the initialisation right. What to do?

    Read the article
