Saturday, May 31, 2014

May

It was a busy May.

First, I attended WATA 2014, the 7th International Workshop on Weighted Automata: Theory and Applications, from 5 May till 9 May, where I gave a talk on synchronizing weighted automata, about which I already wrote a post a few months ago. There were several interesting talks during the (dense) week, check the details here (in Hungarian). Then I went to Athens for another week of scientific cooperation - it turned out the topic is a bit difficult for me to grasp, but it was a quite useful visit anyway. After that came AFL 2014 (held in Szeged, so this time I did not have to travel anywhere) - at this conference I gave a similar but different talk on the same synchronization topic, see the paper (completely free to read). I was a bit sick that week, though, so I could not really attend the first part of the conference, but you can read the details here (in Hungarian). Moreover, we received the notification from DCFS 2014 that our paper on biclique coverings, rectifier networks and their direct application to epsilon-removal of NFAs had been accepted, and we had to compile a camera-ready version of the submission according to the referees' (quite positive) suggestions.
That's plenty for one month; it does not help that most conferences tend to cluster their deadlines within a very short time frame...

Saturday, April 26, 2014

PCP -- undecidability on four tiles?

Hey, someone please check this.
The author claims to have shown that PCP is already undecidable for FOUR tiles.

Tuesday, March 18, 2014

P is not NP, again

today's posting: http://arxiv.org/abs/1403.4143
it's maybe this year's first P!=NP posting (there have already been at least three P=NP ones)

okay, get back to science

Saturday, March 15, 2014

(Location-)synchronizing weighted automata

Back on the synchronization topic. I've extended the notion of synchronizability of nondeterministic automata to automata with transitions weighted in an arbitrary semiring.

For preliminaries, a semiring is an algebra K=(K,+,*,0,1) where (K,+,0) is a commutative monoid with neutral element 0, (K,*,1) is a (not necessarily commutative) monoid with neutral element 1, with 0 being an absorbing element and * distributing over +, i.e.,

0a = a0 = 0,
(a+b)c = ac + bc,
a(b+c) = ab + ac

for each a, b, c in K. One usually assumes |K|>1 in order to avoid trivial cases, which in turn implies 0 is different from 1.

As an example, the naturals, the integers, the (nonnegative) rationals and the (nonnegative) reals form semirings with the standard addition and multiplication. Among these guys the integers, rationals and reals form a ring since their additive structure is a group (have inverses). Another useful example is the Boolean semiring {0,1} with addition being disjunction and product being conjunction, i.e. 1+1 = 1 (the other values come from the definition directly: 1+0 = 0+1 = 1 and 0+0 = 0 since 0 is neutral for +, 1*1 = 1, 0*1 = 0, 1*0 = 0 since 1 is neutral for *, 0*0 = 0 since 0 is absorbing for *).

Given a semiring K and an alphabet A (that is, a finite nonempty set), a K-weighted A-automaton is simply a collection {M_a}_{a ∈ A} of n×n matrices for some integer n>0, with entries in K. That is, to each letter a of the alphabet a matrix M_a ∈ K^{n×n} is associated.

Usually one also gives an initial and a final vector but that's not needed in the synchronization framework so we omit it now.

Now to each word w = a_1...a_k a matrix M_w is associated in the uniform way M_w := M_{a_1} M_{a_2} ... M_{a_k}. Note that w ↦ M_w is the unique homomorphism from A* to K^{n×n} extending the assignment a ↦ M_a.
We call a word w location-synchronizing if for some column index i, the matrix M_w has zeros everywhere outside column i, and within column i every entry is nonzero. If moreover the entries in column i are all equal, we call w synchronizing. The matrix M_w itself is also called location-synchronizing (resp. synchronizing).
As examples,
( 1 0 0 )
( 1 0 1 )
( 1 0 0 )
is not (location-)synchronizing since it has nonzero entries in two columns,
( 1 0 0 )
( 0 0 0 )
( 1 0 0 )
is not (location-)synchronizing since it has nonzero entries only in one column (the first one) but in this column a 0 also appears,
( 1 0 0 )
( 2 0 0 )
( 1 0 0 )
is location-synchronizing but not synchronizing and
( 2 0 0 )
( 2 0 0 )
( 2 0 0 )
is synchronizing.
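
Here's a minimal sketch (mine, not taken from any paper) of how these notions can be tested mechanically. The semiring is passed in as its addition, multiplication and zero element, and all the helper names are made up for illustration:

    from functools import reduce

    def mat_mul(X, Y, add, mul, zero):
        # product of two square matrices over the semiring (add, mul, zero)
        n = len(X)
        return [[reduce(add, (mul(X[i][k], Y[k][j]) for k in range(n)), zero)
                 for j in range(n)] for i in range(n)]

    def word_matrix(word, matrices, add, mul, zero):
        # M_w = M_{a_1} M_{a_2} ... M_{a_k}; word is a nonempty list of letters
        M = matrices[word[0]]
        for a in word[1:]:
            M = mat_mul(M, matrices[a], add, mul, zero)
        return M

    def is_location_synchronizing(M, zero):
        # exactly one column is nonzero and that column contains no zero entry
        n = len(M)
        cols = [j for j in range(n) if any(M[i][j] != zero for i in range(n))]
        return len(cols) == 1 and all(M[i][cols[0]] != zero for i in range(n))

    def is_synchronizing(M, zero):
        # location-synchronizing with all entries of the nonzero column equal
        if not is_location_synchronizing(M, zero):
            return False
        j = next(j for j in range(len(M)) if M[0][j] != zero)
        return all(row[j] == M[0][j] for row in M)

    # the third example matrix above, over the naturals:
    M = [[1, 0, 0], [2, 0, 0], [1, 0, 0]]
    print(is_location_synchronizing(M, 0), is_synchronizing(M, 0))   # True False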

For a semiring K, the K-synchronizability and K-location synchronizability problems are the following:

given a K-weighted automaton, does it possess a (location-)synchronizing word?

In other words, given a finite list of matrices M_1, M_2, ..., M_k in K^{n×n}, does there exist a (location-)synchronizing matrix of the form M = M_{i_1} M_{i_2} ... M_{i_t} for some t>0?

Now in the Boolean semiring B, both problems are the same since a column of nonzero elements can only be a column of ones, thus in B, every location-synchronizing matrix is also synchronizing.
It's been known for a few years (check the authors' researchgate page here) that B-synchronizability is PSPACE-complete, already for partial automata, i.e. the case where in each matrix, each row contains at most one 1.

Thus we can easily deduce that K-synchronizability and K-location synchronizability are PSPACE-hard for every semiring K.

I've proven only a few things so far:
  • If K is finite, then both problems are in PSPACE.
  • If K is locally finite and the operators + and * are computable, then both problems are decidable.
  • If K is both zero-sum-free and zero-divisor-free, then K-location synchronizability is in PSPACE, thus is PSPACE-complete.
  • If the freely generated semigroup over two generators, i.e. {a,b}+ can be embedded into the multiplicative structure of K, then K-synchronizability is undecidable. (Note that in this case K-location synchronizability can be still in PSPACE, e.g. when K itself is the language semiring P({a,b}*) with union as addition and concatenation as product.)
  • If the matrix mortality problem is undecidable for K, then both problems are undecidable for K. (That's the case for the integers, showing that the presence of zero sums can lead to undecidability of location synchronizability in semirings that are not locally finite.)
  • If the naturals with the standard operations can be embedded into K  (i.e. when 0, 1, 1+1, 1+1+1, ... are all pairwise different), then K-synchronizability is undecidable.
Personally I think that this last result is a bit surprising (since e.g. mortality is decidable for the naturals) and its proof is slightly more involved than the others.
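
To illustrate the first item above: when K is finite, the set of all products of the letter matrices is finite, so one can naively compute its closure under multiplication and test every reachable matrix. A brute-force sketch of mine, reusing the hypothetical helpers from the previous snippet; note that this only shows decidability, the claimed PSPACE bound needs a more careful argument:

    def has_synchronizing_product(letter_matrices, add, mul, zero, check):
        # check is e.g. is_location_synchronizing or is_synchronizing from above
        key = lambda M: tuple(map(tuple, M))
        frontier = list(letter_matrices)
        seen = {key(M) for M in frontier}
        while frontier:
            M = frontier.pop()
            if check(M, zero):
                return True          # some product of letter matrices does the job
            for A in letter_matrices:
                P = mat_mul(M, A, add, mul, zero)
                if key(P) not in seen:
                    seen.add(key(P))
                    frontier.append(P)
        return False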

More details soon (where "soon" means "within two weeks" now.. really :-) )

Friday, February 28, 2014

Nonpermutational stuff, again

So.
We've seen that when S is a subsemigroup of Tn, whose members have exactly one fixed point each, then we can group them by their fixed points: let Si be the subsemigroup of S consisting of those guys having i as fixed point.
We've also seen that Si is indeed a semigroup and that for each i there exists a partial ordering <_i of [n] such that whenever pf=q for some f in Si and p is different from i, then p <_i q. This <_i has i as a maximal element.
Without loss of generality we can assume that

  • Sn is (one of) the largest among those Si guys and
  • <_n is consistent with the standard ordering <,
i.e. for each f in Sn we have p < pf whenever p < n.

We show that if Sn is "large enough" then the other Si guys are "small". Namely,
Lemma. Suppose that for all i<j and i'<j' with i<>i' there exists some f in Sn with if=j and i'f=j'.
Then for each i and f in Si we have

  1. if j<i, then j < jf;
  2. if i<=j, then i=jf.
Observe that this lemma implies that |Si| <= (n-1)!/(n-i)!: for each element smaller than i one can only choose some larger element as its image, while all the other elements have to be mapped to i. Summing these bounds over i (writing j=n-i, the sum is (n-1)!·(1/0! + 1/1! + ... + 1/(n-1)!) < e(n-1)!), we get an upper bound e(n-1)! for the total size of these semigroups.
The other option is that Sn does not satisfy the conditions for the lemma. It's routine to check that in this case |Sn| <= (n-1)! - (n-3)! (this bound is sharp). Since Sn was chosen to be the largest, we have that |S| <= n((n-1)! - (n-3)!).
Now for the proof of the lemma...

  1. For i = n, the claims get satisfied: guys below n get elevated and n is mapped to n, okay. So assume i < n. And let f be a member of Si.
  2. Then, nf < n (since n is not a fixed point of f).
  3. Assume pf < p for some p and that pf <> nf. Then by the assumption, nfg = n and pfg = p for some g in Sn. Thus the composition fg has two fixed points p and n, a contradiction. Hence if pf < p, then pf = nf. In particular, if p < nf, then p <= pf. (That is, all backward arrows share their target.)
  4. Assume nf < i. Note that if = i. Then by assumption nfg = i and ig=n for some g in Sn, and fgfg has two fixed points n and i, a contradiction. Thus, i <= nf.
  5. Assume i < nf. The sequence n, nf, nf^2, ... eventually reaches i (since f is nonpermutational with fixed point i) and i < nf, hence there exists some smallest k such that nf^{k+1} < nf. But then nf^k >= nf > nf^{k+1} by the minimality of k, so the arrow from nf^k to nf^{k+1} is a backward arrow whose target differs from nf, contradicting step 3. Thus i = nf.
  6. Observe that the last sentence of step 3 and step 5 together prove Statement 1: since nf = i, every j < i satisfies j <= jf by step 3, and j < jf follows because j is not the fixed point of f.
  7. Assume i < j < jf. Then for some g, jfg = n and ig = j and fgfg has two fixed points n and j, a contradiction. Hence if i < j, then jf < j, thus jf = i, proving Statement 2.
That's all. It turned out to be not that horrible at all...
...I'd guess the bound is not tight. I would be surprised if some S with |S| >= 3(n-1)! could actually be constructed...

Thursday, January 30, 2014

NP is in DTIME(n^{polylogn})? or not?

While I was slicing chicken, my phone fetched today's arXiv posts in my topics of interest and I bumped into this. While it's not the "usual" P=NP stuff, I'm still quite sceptical..
..however, if it's correct, it has several serious consequences for complexity theory. And the author has several strong publications..

..worth a read. who knows.

Monday, January 27, 2014

On the number of nonpermutational transformations

So for recap, a transformation f: [n]→[n] is nonpermutational ([n] stands for the set {1,...,n} here) if f(X)=X implies |X|=1 for any nonempty subset X of [n].

(These guys are interesting now since a reduced automaton recognizes a definite language if and only if every nonempty word induces a nonpermutational transformation on the set of states, but it's not important for now.)

For example, f=(2,3,3) and g=(1,1,2) are nonpermutational (we write transformations of [n] also as vectors, the above f is defined as f(1)=2, f(2)=3 and f(3)=3), but their product fg = (1,2,2) is not since fg({1,2})={1,2}. Product here is function composition, i.e. fg(x)=g(f(x)). Sorry for the ordering: when f and g are words inducing some transformation on the state set of an automaton, then fg, the concatenation of f and g in this order induces a function that way: first we apply (i.e. "read") f, then g.
That's why we will write pf instead of f(p) for a p in [n].

Thinking a bit, the following are equivalent for a function f:[n]→[n]:
  1. f is nonpermutational;
  2. f^k is constant for some k;
  3. the graph of f is a rooted tree with edges directed towards the root and with a loop attached to the root.
Here "the graph of f" is the one with vertex set [n] and p→q is an edge if and only if pf=q. Thus, each node has outdegree one. Since the graph is finite, this implies the existence of at least one cycle.

Clearly, if f is any transformation of [n], then f(X)=X for X being the (nonempty) set of all nodes lying on some cycle. Thus if f is nonpermutational, then this X has only one element, hence there is only one vertex lying on a cycle -- which then has to be a loop. Removing this loop we get a DAG (a directed acyclic graph), with each node but the originally looped one having outdegree one, showing that 1) implies 3). Now 3) implies 2) since each application of f takes each vertex closer to the root, and thus f^n maps every vertex to the root. Finally, 2) implies 1) since if Xf=X, then Xf^k=X as well; as f^k is by assumption a constant map, X is a singleton, thus f is indeed nonpermutational.
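
A small sketch (my own toy code, hypothetical helper names) of the equivalence above: f is nonpermutational iff f^n is a constant map, with transformations stored as tuples t where t[p-1] = pf, as in the post.

    def compose(f, g):
        # fg in the post's ordering: first apply f, then g
        return tuple(g[f[p] - 1] for p in range(len(f)))

    def is_nonpermutational(f):
        # on an n-element set, n applications push every point into the fixed point
        power = f
        for _ in range(len(f) - 1):
            power = compose(power, f)
        return len(set(power)) == 1

    f = (2, 3, 3)
    g = (1, 1, 2)
    print(is_nonpermutational(f), is_nonpermutational(g))   # True True
    print(is_nonpermutational(compose(f, g)))               # False: fg = (1, 2, 2)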

In the following, when f is a nonpermutational transformation, Fix(f) denotes its unique fixed point.

We are interested in the following question: given n, at most how large can a semigroup of nonpermutational transformations of [n] be? Since the set of all nonpermutational functions does not form a semigroup, it's not as easy as counting all the nonpermutational functions -- and the answer is a much smaller number.

Trivial bound: n^n. It's the total number of transformations...
A slightly lower but still trivial bound: n^{n-1}. By Cayley's formula there are n^{n-2} labelled trees on the vertex set [n], hence n·n^{n-2} = n^{n-1} rooted ones, and these are exactly the graphs of the nonpermutational transformations (with the loop added at the root). But, they do not form a semigroup...
A somewhat more carefully designed bound: n!. To see this, let S be a semigroup consisting of nonpermutational transformations only. Let Si, for i in [n], stand for the set of those members f of S with Fix(f)=i. Then each Si is also a semigroup (since if i is a fixed point of f and g, then it is a fixed point of fg as well). Now for i in [n], let us consider the graph Gi with vertex set [n] and (p,q) being an edge iff pf=q for some f in Si. Observe that since Si is a semigroup, Gi is transitive (if f maps p to q and g maps q to r, then fg maps p to r). Clearly i is a sink of Gi (there is a loop attached to i and no other outgoing edges). Let's remove this loop and let Gi' stand for the resulting graph. Suppose there is a cycle in Gi'. Then by transitivity there is a loop in Gi' as well (removing the loop from the sink does not affect transitivity), implying that there is a function in Si having a fixed point different from i, which is nonsense by the definition of Si. Hence Gi' is a transitive DAG -- a strict partial ordering of [n] with i as a maximal element. Let < be an arbitrary linear extension of this partial order (we may choose it so that i is the greatest element). Then any member f of Si obeys p<pf for p<>i and if=i. The total number of such f's is at most (n-1)!: one can choose at most (n-1) different values for the smallest element, (n-2) for the second smallest and so on. Thus |Si| is at most (n-1)!, and since S is the disjoint union of the n sets Si, we get the upper bound n(n-1)! = n!.

Next time we reduce this bound a bit by a horrible case analysis to get an upper bound n((n-1)!-(n-3)!)...

stay tuned


Friday, November 22, 2013

On NL vs UL

Just for recap: a (decision) problem is in NL if some logspace-bounded nondeterministic machine can decide it. It's in UL if in addition the machine has at most one accepting path for each input. (U is for unambiguous here.)

I don't have that deep an insight into the topic, but it seems to me that NL = UL is quite plausible. However, to understand the problem statement of this post, you don't have to know anything about nondeterminism or unambiguity; it suffices to know what logspace computability means.
In order to show this it suffices to define an algorithm f that
  • takes a directed graph G=(V,E) as input;
  • assigns a positive integer weight w(e) to each edge e of G;
  • such that w(e) is bounded by some polynomial of |V|;
  • such that for each pair (s,t) of nodes with t being reachable from s there exists a unique path from s to t having minimum weight;
  • finally, f has to be computable in logspace.
E.g., without the polynomial-boundedness condition a good f would be one that assigns distinct powers of two to the edges, i.e. edge i gets weight 2^i.

The thing would work since the "min-unique" problem (given (G,s,t), say "yes" if t is reachable from s via a unique shortest path, say "no" if t is not reachable from s at all, say anything in the remaining cases) is in UL and UL is closed under logspace reductions, thus one has "only" to reduce the NL-complete reachability problem (in dags, or in strictly layered graphs, whatever) to min-uniqueness. Having these weights in logspace, each weighted edge e can be replaced by a path of length w(e), concluding the reduction.
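
Here's a toy sketch (mine) of the non-polynomial variant mentioned above: give edge number i the weight 2^i, which makes all path weights distinct (so minimum-weight paths are automatically unique), then subdivide each edge e into a path of w(e) unit edges. This is of course neither logspace nor polynomially bounded; it only illustrates what the sought-after algorithm f has to achieve with small weights.

    def weight_edges(edges):
        # assign distinct powers of two to the edges, in some fixed order
        return {e: 2 ** i for i, e in enumerate(edges)}

    def subdivide(nodes, edges, w):
        # replace each edge e=(u,v) by a directed path of w(e) unit edges
        new_nodes, new_edges = list(nodes), []
        for (u, v) in edges:
            prev = u
            for i in range(w[(u, v)] - 1):
                mid = (u, v, i)                  # fresh intermediate node
                new_nodes.append(mid)
                new_edges.append((prev, mid))
                prev = mid
            new_edges.append((prev, v))
        return new_nodes, new_edges

    nodes = [1, 2, 3, 4]
    edges = [(1, 2), (2, 4), (1, 3), (3, 4)]
    w = weight_edges(edges)                      # weights 1, 2, 4, 8
    print(len(subdivide(nodes, edges, w)[1]))    # 15 unit edges in total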

For more details see Raghunath Tewari's PhD dissertation from 2011.

Saturday, October 12, 2013

Fav Problem: A 30-year-old conjecture of Zsolt Tuza

Okay, so this one isn't too deeply attached to TCS, but anyways.
Suppose G is a (finite) graph in which we can choose k edge-disjoint triangles but not more.
We want to make the graph triangle-free by removing edges, as few as possible. How many edges suffice?
Trivial observation #1: we have to remove at least k edges -- one from each triangle from a k-element edge-disjoint set of triangles.
Trivial observation #2: 3k edges surely suffice -- fix a maximal edge-disjoint set of triangles and remove all of their edges. (No triangle can remain intact, otherwise the set could not be maximal).
Now K4 gives a lower bound of 2k: in K4 any edge-disjoint set of triangles has at most one element, yet we have to remove two edges to eliminate all triangles. (A disjoint union of k copies of K4 gives an example where k goes to infinity.)
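
A tiny sketch (mine, not from any of the papers below) of trivial observation #2: greedily collect a maximal edge-disjoint set of triangles and delete all of their edges, which removes at most 3k edges and leaves the graph triangle-free.

    from itertools import combinations

    def greedy_triangle_free(vertices, edge_list):
        # returns (picked edge-disjoint triangles, edges removed)
        edges = {frozenset(e) for e in edge_list}
        picked, removed = [], set()
        for a, b, c in combinations(sorted(vertices), 3):
            tri = [frozenset({a, b}), frozenset({b, c}), frozenset({a, c})]
            if all(e in edges for e in tri):
                picked.append((a, b, c))
                edges -= set(tri)
                removed |= set(tri)
        return picked, removed

    # K4: one triangle gets picked and three of the six edges are removed,
    # while the conjecture says two edges should suffice.
    k4 = [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
    tris, cut = greedy_triangle_free([1, 2, 3, 4], k4)
    print(len(tris), len(cut))   # 1 3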
The conjecture states that removing 2k edges always suffices.
Some progress on that: the conjecture holds (among other classes) for...
  • planar graphs, Tuza 1985
  • tripartite graphs, Haxell 1993
  • K3,3-subdivision-free graphs, Krivelevich 1995
  • graphs with maximum average degree at most 7, Puleo 2013.
I'm interested in the (correctness of the) last result, see here: http://arxiv-web3.library.cornell.edu/abs/1308.2211
anyone else? :-)

Saturday, September 28, 2013

Minimum-weight perfect matching in Android system code

Heh.. I've just bumped into this article:
http://www.troodon-software.com/Algorithms/Android/2010/06/20/euclidean-bipartite-matching-multi-touch-and-android/

Check this, it's funny :) Briefly, Android's multitouch dragging uses a function that tries to solve a bipartite matching problem (the aim: tracking fingers). The function is a greedy algorithm that sometimes behaves awkwardly. Also, a comment refers to an article in which a near-linear approximation algorithm is developed.

It's maybe just me, but..

  • the greedy algorithm seems to be a quick hack and
  • the complicated near-linear approximation seems to be overkill?
I mean.. come on, how large can n, the number of points, be in this task? Most people only have ten fingers :-P At this scale it's not the asymptotics that matter; the constants also play a significant role. So - I would go with the Hungarian method: it's easy to implement and worst-case n^3 is not a big deal at this scale.. I guess.
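
Just to make the point concrete, here's a minimal sketch (mine, not Android's actual code) of the exact small-scale alternative: treat finger tracking as an assignment problem and solve it with the Hungarian method; scipy's linear_sum_assignment solves the same assignment problem in worst-case cubic time.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    old_points = np.array([[10.0, 10.0], [50.0, 50.0], [90.0, 10.0]])   # tracked fingers
    new_points = np.array([[52.0, 48.0], [11.0, 12.0], [88.0, 11.0]])   # current frame

    # cost[i][j] = squared Euclidean distance between old point i and new point j
    cost = ((old_points[:, None, :] - new_points[None, :, :]) ** 2).sum(axis=2)
    rows, cols = linear_sum_assignment(cost)
    print([(int(i), int(j)) for i, j in zip(rows, cols)])   # [(0, 1), (1, 0), (2, 2)]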

Bottom line: take care with the constants hiding in the big O, especially when they are comparable with the input size.

Vickrey is the new Max

Probably most of you have already encountered some kind of auction system (e.g. buying something on Amazon, eBay etc). There is an item to be sold, a number n of agents, and agent i thinks the item is worth v_i to him. Of course, if this data were available to everyone, then there would be no need for an auction: the agent i who maximizes v_i would get the stuff and pay v_i.
However, such information is sensitive. Thus, during the auction the agents submit bids (either all at once or sequentially, in one round or in several) and the winner is determined from the amounts they bid.

A possibility for a one-turn sealed-bid auction (sealed-bid means that the agents do not know each other's bids, only the auctioneer does) is that the auctioneer collects the bids b_i from each agent i, e.g. in sealed envelopes, and the agent i who maximizes b_i gets the stuff in exchange for his bid. This is the simplest model and the easiest one to understand.

However, it suffers from the so-called winner's curse: informally, the winner will almost always be unhappy with the outcome. Say there are three agents bidding 1, 2 and 10 respectively. Then Agent 3 gets the stuff for the price of 10. He could also have got the stuff for the price of 3 -- or, if not only integers are in play, for any real number strictly greater than 2. Thus he can view the situation as "alas, I just lost 8 dollars, I should have bid only 2 dollars and one cent", which probably makes him unhappy.

There is a simple way to avoid the winner's curse by employing the Vickrey mechanism: the agent i maximizing b_i still gets the stuff, but he has to pay only the second-highest bid. In the previous example, Agent 3 gets the stuff but it costs him only two dollars.
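
The whole mechanism fits in a few lines; a toy sketch (mine):

    def vickrey(bids):
        # bids: dict agent -> bid; returns (winner, price paid)
        winner = max(bids, key=bids.get)
        price = max(b for a, b in bids.items() if a != winner)
        return winner, price

    print(vickrey({"Agent 1": 1, "Agent 2": 2, "Agent 3": 10}))   # ('Agent 3', 2)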
The Vickrey mechanism has numerous other advantages, including:

  • it gives an incentive for truthful bidding: each (selfish!) agent now plays his best strategy by bidding exactly b_i := v_i
  • thus, it's also decision efficient: Agent i maximizing v_i gets the stuff.
(The system is vulnerable to shill bids, though.)
Things get much more complicated when there are more items to sell (either identical or not; an agent can bid item-wise or for a set of items, etc). In that case, both computational and economic problems arise.

Anyone wants to check the literature and write an essay on that? ;-)

Saturday, August 31, 2013

What makes some instances of hard problems hard and others rather easy?

We all know that there exist easily solvable instances (with respect to some algorithm, at least) of NP-complete problems, e.g. sudoku puzzles found in daily newspapers etc. And of course there exist hard ones, coming from NP-hardness reductions (check e.g. this one).
But what makes these instances "easy" or "hard"? A convenient answer is "it depends on the solver algorithm", but hardness seems to be a more universal notion.
For example, there is a well-known phase transition in the case of randomly generated 3SAT instances: suppose we generate k clauses over n variables, each clause consisting of three random literals. Now,

  • if k is less than 4.25n (approximately), then the generated formula is satisfiable with high probability due to underconstrainedness -- and most solvers can find a solution within a low runtime;
  • if k is more than 4.25n (again, approximately), then the generated formula is whp overconstrained and most solvers can deduce a contradiction within a low runtime,
  • however,
  • if k is around 4.25n, then deciding whether the formula is satisfiable or not is hard for all the solvers available. (That's why benchmarks and testbeds are usually generated exactly this way; a toy generator sketch follows below.)
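
A toy generator sketch (mine) for such threshold instances: k = 4.25·n clauses, each with three distinct variables and random signs.

    import random

    def random_3sat(n, ratio=4.25, seed=None):
        # clauses as lists of nonzero integers, DIMACS-style: -v means "not v"
        rng = random.Random(seed)
        k = int(round(ratio * n))
        clauses = []
        for _ in range(k):
            variables = rng.sample(range(1, n + 1), 3)     # three distinct variables
            clauses.append([v if rng.random() < 0.5 else -v for v in variables])
        return clauses

    print(len(random_3sat(20)))   # 85 clauses over 20 variables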
So, these generated instances are "universally hard" -- but what does universality mean here? Is there a mathematical notion for measuring hardness of problem instances?
Research in this direction could lead to a better understanding of the structure of NP-hard problems.

Thus it's probably very, very hard.

Any willing BSc / MSc student there? ^_^

Monday, July 29, 2013

Cov(G) vs Rect(G)

Let G=(A,B,E) be a bipartite graph. Cov(G) stands for its biclique cover weight: the minimum total weight of a covering of G with bicliques, where the weight of a biclique is the number of its vertices. Rect(G) stands for the minimum number of edges of a directed graph (V,E') with A ∪ B being a subset of V, in which, for each a in A and b in B, b is reachable from a if and only if (a,b) is in E.

The question is, what's the exact relationship between Rect(G) and Cov(G)? Clearly Rect(G) <= Cov(G) <= Rect(G)^2, but we should close the gaps.
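
The easy direction Rect(G) <= Cov(G) is just a construction; a sketch of mine: from a biclique cover, build a depth-2 graph with one fresh middle node per biclique. Its edge count equals the weight of the cover, and b is reachable from a exactly when some biclique of the cover contains the pair, i.e. when (a,b) is in E.

    def rectifier_from_cover(cover):
        # cover: list of bicliques (A_i, B_i), each given as a pair of sets
        edges = []
        for idx, (A_i, B_i) in enumerate(cover):
            mid = ("biclique", idx)                  # fresh middle vertex
            edges += [(a, mid) for a in A_i]
            edges += [(mid, b) for b in B_i]
        return edges

    # K_{2,2} is covered by the single biclique ({a1,a2},{b1,b2}) of weight 4:
    print(rectifier_from_cover([({"a1", "a2"}, {"b1", "b2"})]))   # 4 edges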

Saturday, July 20, 2013

Largest semigroup of "nonpermutational" transformations

Let Tn be the set of all transformations of [n]={1,...,n} for each n>0. Tn is a semigroup with product being function composition, i.e. (fg)(x) := g(f(x)) for each f,g ∈ Tn and x ∈ [n]. A subsemigroup of Tn is a subset of Tn which is closed under composition.
Call a transformation f: A→A nonpermutational if f(B)={f(x): x∈B}=B implies |B| ≤ 1 for any B ⊆ A. (Equivalently, if A is finite, then f is nonpermutational if it has a unique fixed point Fix(f) and no cycles other than the loop on Fix(f).)
The nonpermutational transformations of [n] do not form a semigroup. The question is, if T is a subsemigroup of Tn consisting of only nonpermutational transformations, how large can T be?

Partial results so far:

Probably there will be a follow-up on this subject soon -- stay tuned... ;-)

Thursday, June 27, 2013

Uniquely solvable puzzles

We know that most "interesting" one-player games (or puzzles) are NP-hard. Read as:
given a puzzle, it's hard to decide whether it has a solution or not.

This question (seemingly) has little to do with the following one:
given a puzzle which is guaranteed to have exactly one solution, compute the solution.

Note that the standard reduction techniques do not preserve the number of solutions (e.g. no UP-complete problem is known as of today). I know that if we drop "exactly" (and write "at least" instead), then we get the class TFNP. But I do not know what this class of "uniquely solvable" problems is called and I'm interested in the current state of the art in this topic :-)

I could use someone to collect and read the literature for me ;-)

Wednesday, June 26, 2013

Addition-subtraction chains

While implementing yet another iterated squaring algorithm (yep, this one) and thinking about possible optimizations I bumped into addition-subtraction chains.

An addition-subtraction chain for some positive integer N is a list of positive integers which

  • begins with 1,
  • all the other elements are either the sum or the difference of two earlier members of the list (the two members may coincide in the case of addition),
  • ends with N.
For example, 1,2,4,8,16,32,64,60 is an addition-subtraction chain of length 8 for 60.

The question is to determine the complexity of the following problem: given an integer N, what is the length of the shortest addition-subtraction chain for N?

(Of course it's somewhere around lg N.)
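
For the upper bound behind that remark, the classical binary (iterated squaring) method already gives a chain, without any subtractions, of roughly 2·lg N elements; a quick sketch of mine:

    def binary_addition_chain(n):
        # left-to-right binary method: double at every bit, add 1 on the set bits
        chain = [1]
        for bit in bin(n)[3:]:                   # skip '0b' and the leading 1
            chain.append(chain[-1] * 2)          # x -> x + x
            if bit == "1":
                chain.append(chain[-1] + 1)      # x -> x + 1 (1 is the first element)
        return chain

    print(binary_addition_chain(60))   # [1, 2, 3, 6, 7, 14, 15, 30, 60]

Note that for 60 this gives 9 elements, while the single subtraction in the chain above brings it down to 8.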

Monday, June 24, 2013

Fav Problem #2: R(5,5)

At most how many vertices can a graph G have such that no complete subgraph on 5 nodes appears in either G or its complement?

The answer, R(5,5)-1, is known to lie somewhere between 42 and 48 (equivalently, 43 <= R(5,5) <= 49).

For more details on Ramsey numbers see this:

Decidability of dot-depth 2

The following problem is well-known in the TCS community and has been open for around four decades.

Focusing only on the problem statement: a language L⊆Σ* is of dot-depth two if it is a Boolean combination of languages of the form Σ_1^* a_1 Σ_2^* a_2 ... Σ_k^* a_k Σ_{k+1}^* for some Σ_i ⊆ Σ and a_i ∈ Σ. For example, (a+b)*a(b+c)* ∩ a*b(a+b)* is of dot-depth two.

The question: is the following problem decidable?

  • Input: a finite automaton recognizing some language L.
  • Output: is L of dot-depth two?
Note: for the case of piecewise testable languages, that is, the case with Σ_i = Σ for each i, there exist several polytime algorithms, the most recent being this one:

http://link.springer.com/chapter/10.1007%2F978-3-642-38771-5_26

It would be interesting to see whether this new proof can be changed to cover a broader class of languages.

Fav Problem #1: Directing partial automata


A partial automaton is a system M=(Q, A, δ) with Q being a finite nonempty set of states, A being a finite nonempty alphabet and δ being a partial function from Q×A to Q (meaning something like "if I'm in q and get the symbol a on input, I will be in q'" if δ(q,a)=q', and "if I'm in q and get the symbol a on input, I'm stuck" if δ(q,a) is undefined). Instead of δ(q,a) we often write simply qa.
The action is extended to words as usual, setting qua := (qu)a. Of course if qu is undefined, then so is qua.

A word w is a directing word of M if pw=qw is defined for each state p,q in Q. In other words, if M gets w on input, no matter in which state M is, after processing w it will be in a fixed state. M is called directable if it possesses at least one directing word.

The question can be stated as follows:
Given an n-state directable partial automaton, how long can its shortest directing word be, at most?
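
For small automata one can simply find a shortest directing word by breadth-first search over subsets of Q; a brute-force sketch of mine (exponential in the number of states, so it says nothing about the bounds below):

    from collections import deque

    def shortest_directing_word(states, alphabet, delta):
        # delta: dict (state, letter) -> state; undefined transitions are simply missing
        start = frozenset(states)
        seen = {start}
        queue = deque([(start, "")])
        while queue:
            current, word = queue.popleft()
            if len(current) == 1:
                return word                                # all states merged
            for a in alphabet:
                if all((q, a) in delta for q in current):  # careful: nobody may get stuck
                    nxt = frozenset(delta[(q, a)] for q in current)
                    if nxt not in seen:
                        seen.add(nxt)
                        queue.append((nxt, word + a))
        return None                                        # not directable

    # toy example: 'a' merges 1 and 2 but is undefined in 3, 'b' sends 3 to 1
    delta = {(1, "a"): 1, (2, "a"): 1, (1, "b"): 2, (2, "b"): 2, (3, "b"): 1}
    print(shortest_directing_word([1, 2, 3], "ab", delta))   # 'ba'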

Known bounds:
  1. For each n there exists a directable n-state partial automaton whose shortest directing word has length Ω(3^{n/3}). See P.V. Martyugin, Lower bounds for length of carefully synchronizing words, presented at the Satellite Workshop on Words and Automata of the International Computer Science Symposium in Russia (CSR'06), St. Petersburg, 2006. Basically a ternary counter is simulated.
  2. Every n-state directable partial automaton possesses a directing word of length O(n^2·4^{n/3}). See here: http://dl.acm.org/citation.cfm?id=1570791
My intuition says 3^{n/3} is closer to the solution..